Conversation

@Snider
Owner

@Snider Snider commented Dec 25, 2025

Add comprehensive networking analytics to support peer selection based
on performance and trust metrics for KD-Tree based NAT routing:

  • Add kdtree_analytics.go with TreeAnalytics, PeerAnalytics,
    DistributionStats, NATRoutingMetrics, TrustMetrics, and
    QualityWeights structures
  • Track query/insert/delete operations with timing statistics
  • Track per-peer selection frequency and average distances
  • Add PeerQualityScore() for composite peer ranking
  • Add ComputeTrustScore() for reputation-based selection
  • Add distribution statistics (min, max, mean, median, percentiles)
  • Add feature normalization helpers for multi-dimensional peer data

WASM/TypeScript integration:

  • Expose all analytics via WASM bindings
  • Update TypeScript definitions with full type coverage
  • Update loader.js with new API methods
  • Update TypeScript demo to showcase analytics features

Includes comprehensive test coverage for all analytics functionality.

Add comprehensive DNS tools module for network analysis:

DNS Lookup functionality:
- Support for A, AAAA, MX, TXT, NS, CNAME, SOA, PTR, SRV, CAA records
- DNSLookup() and DNSLookupAll() for single/complete lookups
- Configurable timeouts
- Structured result types for all record types

RDAP (new-style WHOIS) support:
- RDAPLookupDomain() for domain registration data
- RDAPLookupIP() for IP address information
- RDAPLookupASN() for autonomous system info
- Built-in server registry for common TLDs and RIRs
- ParseRDAPResponse() for extracting key domain info

External tool link generators:
- GetExternalToolLinks() - 20+ links for domain analysis
- GetExternalToolLinksIP() - IP-specific analysis tools
- GetExternalToolLinksEmail() - Email/domain verification

Tools include: MXToolbox (DNS, MX, SPF, DMARC, DKIM, blacklist),
DNSChecker, ViewDNS, IntoDNS, DNSViz, SecurityTrails, SSL Labs,
Shodan, Censys, IPInfo, AbuseIPDB, VirusTotal, and more.

WASM bindings expose link generators and RDAP URL builders
for use in TypeScript/browser environments.

- Add support for 13 additional record types: ALIAS, RP, SSHFP, TLSA,
  DS, DNSKEY, NAPTR, LOC, HINFO, CERT, SMIMEA, WR (Web Redirect), SPF
- Add GetDNSRecordTypeInfo() for metadata with RFC references
- Add GetCommonDNSRecordTypes() for commonly used types
- Add structured types for CAA, SSHFP, TLSA, DS, DNSKEY, NAPTR, RP,
  LOC, ALIAS, and WebRedirect records
- Export new functions in WASM bindings
- Update TypeScript definitions and loader.js
- Add comprehensive tests for new record types
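To illustrate the metadata-driven approach behind GetDNSRecordTypeInfo() and GetCommonDNSRecordTypes() described above, here is a hedged, self-contained sketch. The RecordTypeInfo fields and the small registry below are assumptions for demonstration (the real registry covers 20+ types), though the RFC numbers shown are the standard ones for those record types.

```go
package main

import "fmt"

// DNSRecordType names a DNS record type, e.g. "A" or "TLSA".
type DNSRecordType string

// RecordTypeInfo is an assumed metadata shape: type, defining RFC,
// and whether the type is in common use.
type RecordTypeInfo struct {
	Type   DNSRecordType
	RFC    string
	Common bool
}

// recordTypeInfo returns an abbreviated registry (illustrative subset).
func recordTypeInfo() []RecordTypeInfo {
	return []RecordTypeInfo{
		{"A", "RFC 1035", true},
		{"AAAA", "RFC 3596", true},
		{"MX", "RFC 1035", true},
		{"TLSA", "RFC 6698", false},
		{"SSHFP", "RFC 4255", false},
	}
}

// commonTypes derives the common subset from the registry, so the two
// lists cannot drift out of sync (the concern raised in review below
// about hardcoded type lists).
func commonTypes() []DNSRecordType {
	info := recordTypeInfo()
	out := make([]DNSRecordType, 0, len(info))
	for _, r := range info {
		if r.Common {
			out = append(out, r.Type)
		}
	}
	return out
}

func main() {
	fmt.Println(commonTypes()) // [A AAAA MX]
}
```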
@coderabbitai

coderabbitai bot commented Dec 25, 2025

📝 Walkthrough

Summary by CodeRabbit

Release Notes

  • New Features
    • Added comprehensive DNS lookup support for multiple record types (A, AAAA, MX, TXT, NS, CNAME, SOA, SRV, CAA, and more).
    • Added RDAP (Registration Data Access Protocol) lookups for domain, IP, and ASN data retrieval.
    • Added external tool link generation for domains, IP addresses, and email addresses.
    • Added KD-tree analytics including peer statistics, query tracking, and distance distribution analysis.
    • Added peer quality scoring and trust metrics calculation with configurable weighting.
    • Added feature normalisation and weighted peer feature computation for improved peer selection.
    • Added NAT routing metrics and type classification for network diagnostics.


Walkthrough

The pull request introduces comprehensive DNS and RDAP tooling capabilities alongside extensive KDTree analytics tracking. New functionality includes DNS record type abstractions, domain and IP lookups with RDAP integration, external tool link generation, and peer-based analytics with quality and trust scoring, all exposed through WASM bindings for JavaScript consumption.

Changes

DNS and RDAP Tooling (dns_tools.go, dns_tools_test.go)
Introduces 20+ DNS record types (A, AAAA, MX, TXT, SRV, SOA, CAA, SSHFP, TLSA, DS, DNSKEY, NAPTR, RP, LOC, ALIAS, WebRedirect), lookup functions with timeout support, RDAP data structures and lookup endpoints (Domain, IP, ASN), external tool link generators for multiple services (MXToolbox, DNSChecker, WhoIs, SecurityTrails, etc.), and domain info parsing from RDAP responses. Comprehensive test coverage validates all record types, lookup flows, RDAP structures, and tool link generation.

KDTree Analytics Infrastructure (kdtree.go, kdtree_analytics.go, kdtree_analytics_test.go)
Adds time-based query tracking, per-peer selection recording, and operation counters (Insert, Delete, Rebuild) to KDTree. Introduces TreeAnalytics with atomic counters for operation statistics and snapshots, PeerAnalytics for per-peer hit counts and distance tracking, NAT routing metrics (connectivity, symmetry, relay probability), NATTypeClassification, TrustMetrics with reputation and transaction tracking, distribution statistics with percentile calculations, and feature normalisation and weighting for peer quality scoring. Provides public API methods for analytics retrieval, peer stats, top peers, distance distributions, and reset capabilities. Extensive tests validate operation tracking, peer selection recording, distribution calculations, and quality/trust scoring logic.

WASM and NPM Bindings (wasm/main.go, npm/poindexter-wasm/index.d.ts, npm/poindexter-wasm/loader.js)
Exposes all analytics, DNS/RDAP, and peer quality APIs to JavaScript through WASM. Updates TypeScript declarations with comprehensive type definitions for TreeAnalytics, NATRoutingMetrics, TrustMetrics, DNS record types, RDAP structures, and feature handling. Implements Go-to-JS translation helpers (externalToolLinksToJS, RDAP URL builders) and registers 20+ new public API methods under px-prefixed names. Modifies exportJSON to include full point data (id, coordinates, value) plus metadata.

Example Demonstrations (examples/wasm-browser-ts/src/main.ts)
Expands the demo from minimal KDTree operations to a comprehensive analytics workflow, including repeated nearest/k-nearest queries, analytics retrieval, peer statistics, top peers, axis distributions, NAT routing analysis, trust score calculations, feature normalisation, custom weighting, and analytics reset sequences, with expanded console instrumentation.
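The distribution statistics the analytics cohort describes (min, max, mean, median, percentiles) can be sketched in a few lines. This uses a simplified nearest-rank percentile on sorted data; the real DistributionStats may use a different interpolation, so treat the helper below as an assumption.

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the value at the given percentile of a sorted
// slice using a simple floor-index rule (illustrative, not the
// library's exact method).
func percentile(sorted []float64, p float64) float64 {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(p / 100 * float64(len(sorted)-1))
	return sorted[idx]
}

func main() {
	// Example query-distance samples, as peer analytics might collect.
	distances := []float64{4, 1, 3, 2, 5, 9, 7, 8, 6, 10}
	sort.Float64s(distances)

	sum := 0.0
	for _, d := range distances {
		sum += d
	}
	fmt.Println("min:", distances[0], "max:", distances[len(distances)-1]) // min: 1 max: 10
	fmt.Println("mean:", sum/float64(len(distances)))                      // mean: 5.5
	fmt.Println("p50:", percentile(distances, 50))
	fmt.Println("p95:", percentile(distances, 95))
}
```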

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant KDT as KDTree
    participant TA as TreeAnalytics
    participant PA as PeerAnalytics

    App->>KDT: Nearest(query)
    activate KDT
    Note over KDT: Start timing
    KDT->>TA: Record query start
    KDT->>KDT: Perform nearest search
    KDT->>PA: Record peer selection
    deactivate KDT
    KDT->>TA: Record query duration
    activate TA
    TA->>TA: Update query count,<br/>timing metrics
    deactivate TA
    KDT-->>App: Return result + timing

    App->>KDT: KNearest(query, k)
    activate KDT
    KDT->>TA: Record query start
    KDT->>KDT: Perform k-nearest search
    loop For each k result
        KDT->>PA: Record peer selection
    end
    KDT->>TA: Record query duration
    deactivate KDT
    activate PA
    PA->>PA: Increment hit counts,<br/>update distances
    deactivate PA
    KDT-->>App: Return k results

    App->>KDT: GetAnalyticsSnapshot()
    activate KDT
    KDT->>TA: Snapshot query stats
    deactivate KDT
    KDT-->>App: TreeAnalyticsSnapshot

    App->>KDT: GetPeerStats()
    activate KDT
    KDT->>PA: Calculate stats per peer
    deactivate KDT
    KDT-->>App: []PeerStats sorted by frequency

sequenceDiagram
    participant User as User / Client
    participant DNS as DNS Resolver
    participant RDAP as RDAP Server
    participant Tools as External Tools

    User->>DNS: DNSLookupAll(domain)
    activate DNS
    par A Lookup
        DNS->>DNS: Lookup A records
    and AAAA Lookup
        DNS->>DNS: Lookup AAAA records
    and MX Lookup
        DNS->>DNS: Lookup MX records<br/>(sort by priority)
    and NS Lookup
        DNS->>DNS: Lookup NS records
    and TXT Lookup
        DNS->>DNS: Lookup TXT records
    and SOA Lookup
        DNS->>DNS: Lookup SOA record
    and CNAME Lookup
        DNS->>DNS: Lookup CNAME record
    end
    deactivate DNS
    DNS-->>User: CompleteDNSLookup<br/>(all records + timing)

    User->>RDAP: RDAPLookupDomain(domain)
    activate RDAP
    RDAP->>RDAP: Query RDAP endpoint
    deactivate RDAP
    RDAP-->>User: RDAPResponse<br/>(registrar, dates, status)

    User->>DNS: ParseRDAPResponse(rdapResp)
    activate DNS
    DNS->>DNS: Extract registrar,<br/>dates, DNSSEC,<br/>nameservers
    deactivate DNS
    DNS-->>User: ParsedDomainInfo

    User->>Tools: GetExternalToolLinks(domain)
    activate Tools
    Tools->>Tools: Generate URLs for<br/>MXToolbox, DNSChecker,<br/>WhoIs, SecurityTrails, etc.
    deactivate Tools
    Tools-->>User: ExternalToolLinks<br/>(mapped service URLs)

Possibly related PRs

  • Kd tree peer finding #2: Directly related through shared KDTree analytics types, peer-tracking methods, analytics WASM bindings, and JavaScript API exposure for quality/trust scoring and analytics retrieval.

Poem

🐇 Hops of joy through DNS domains deep,
RDAP records and secrets we keep,
KDTree analytics track every peer,
Trust scores and quality rising so clear,
From Go to JavaScript, connections align,
A network of tools working fine!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 43.70%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title accurately reflects the main objective of the PR, exposing networking analytics for KD-Tree NAT routing, and clearly summarises the primary change.
  • Description check (✅ Passed): The description is comprehensive and directly related to the changeset, detailing the analytics structures, features, and WASM/TypeScript integration introduced in the PR.


@gemini-code-assist

Summary of Changes

Hello @Snider, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the KD-Tree implementation by integrating a robust analytics framework and advanced peer selection mechanisms. The primary goal is to enable data-driven decisions for NAT routing, allowing for more intelligent peer selection based on performance, trust, and network characteristics. Additionally, a new set of DNS and network diagnostic tools has been introduced, expanding the utility of the poindexter package.

Highlights

  • Comprehensive KD-Tree Analytics: Introduced TreeAnalytics to track query, insert, and delete operations with detailed timing statistics (min, max, average query time).
  • Peer Selection & Trust Metrics: Added PeerAnalytics to track per-peer selection frequency and average distances, along with PeerQualityScore() for composite peer ranking based on network metrics and ComputeTrustScore() for reputation-based selection.
  • Distribution Analysis & Feature Normalization: Implemented DistributionStats and AxisDistribution for statistical analysis of data distributions within the KD-Tree, and provided helpers for feature normalization and weighting for multi-dimensional peer data.
  • WASM/TypeScript Integration: All new analytics and scoring functionalities are exposed via WASM bindings, with updated TypeScript definitions and a demo showcasing these features.
  • New DNS & Network Tools: Introduced a new dns_tools.go module providing extensive DNS record type definitions, DNS lookup functions, RDAP (WHOIS) lookups, and external tool link generation for domains, IPs, and emails.


@codecov

codecov bot commented Dec 25, 2025

Codecov Report

❌ Patch coverage is 62.11180% with 305 lines in your changes missing coverage. Please review.

Files with missing lines:
  • dns_tools.go: 33.25% patch coverage (267 lines missing) ⚠️
  • kdtree_analytics.go: 93.82% patch coverage (15 lines missing, 5 partials) ⚠️
  • kdtree.go: 77.77% patch coverage (15 lines missing, 3 partials) ⚠️



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive set of analytics and scoring mechanisms for peer selection in the KD-Tree, which is a great enhancement. The new features are well-tested and exposed through the WASM API with corresponding TypeScript definitions.

I have a few suggestions, mostly around maintainability and code clarity:

  • The new file kdtree_analytics.go is quite large and could be split into smaller, more focused files.
  • There are a few magic numbers in the scoring and feature generation logic that could be extracted into named constants.
  • The new dns_tools.go file is a significant addition that broadens the PR's scope. It might be worth considering its placement in the long term.

Overall, this is a solid contribution with extensive new functionality.

// GetCommonDNSRecordTypes returns only commonly used record types
func GetCommonDNSRecordTypes() []DNSRecordType {
info := GetDNSRecordTypeInfo()
result := make([]DNSRecordType, 0)


medium

The slice result is initialized with a size of 0 and then appended to in a loop. A more idiomatic and performant approach in Go is to pre-allocate the slice with a known capacity to avoid reallocations, especially when the final size can be estimated.

Suggested change
result := make([]DNSRecordType, 0)
result := make([]DNSRecordType, 0, len(info))

Comment on lines +215 to +223
func GetAllDNSRecordTypes() []DNSRecordType {
return []DNSRecordType{
DNSRecordA, DNSRecordAAAA, DNSRecordCNAME, DNSRecordMX, DNSRecordTXT,
DNSRecordNS, DNSRecordSOA, DNSRecordPTR, DNSRecordSRV, DNSRecordCAA,
DNSRecordALIAS, DNSRecordRP, DNSRecordSSHFP, DNSRecordTLSA, DNSRecordDS,
DNSRecordDNSKEY, DNSRecordNAPTR, DNSRecordLOC, DNSRecordHINFO, DNSRecordCERT,
DNSRecordSMIMEA, DNSRecordSPF, DNSRecordWR,
}
}


medium

GetAllDNSRecordTypes returns a hardcoded list of record types. This is fragile and can get out of sync if GetDNSRecordTypeInfo is updated. To improve maintainability, this function should derive its list from GetDNSRecordTypeInfo.

func GetAllDNSRecordTypes() []DNSRecordType {
	info := GetDNSRecordTypeInfo()
	result := make([]DNSRecordType, len(info))
	for i, r := range info {
		result[i] = r.Type
	}
	return result
}

@@ -0,0 +1,700 @@
package poindexter


medium

This file is over 700 lines long and handles several distinct responsibilities, including tree operational analytics, peer selection analytics, statistical utilities, peer quality scoring, and feature normalization. For better maintainability and separation of concerns, consider splitting this into multiple, smaller files. For example:

  • tree_analytics.go (TreeAnalytics)
  • peer_analytics.go (PeerAnalytics)
  • peer_scoring.go (NATRoutingMetrics, TrustMetrics, scoring functions)
  • peer_features.go (feature definitions, normalization)
  • stats.go (DistributionStats)

successRate := float64(t.SuccessfulTransactions) / float64(total)

// Volume confidence (more transactions = more confident)
volumeConfidence := 1 - 1/(1+float64(total)/10)


medium

The number 10 used here to calculate volumeConfidence is a magic number. It would be more readable and maintainable to define it as a named constant with a comment explaining its purpose, for example const trustConfidenceFactor = 10.0.

recencyBonus := 0.0
if !t.LastSuccessAt.IsZero() {
hoursSince := time.Since(t.LastSuccessAt).Hours()
recencyBonus = 0.1 * math.Exp(-hoursSince/168) // Decays over ~1 week


medium

The number 168 (24 * 7, i.e., hours in a week) used for the recency bonus decay is a magic number. It would be clearer to define this as a named constant, e.g., recencyBonusDecayHours, to make the logic self-explanatory.

Comment on lines +634 to +637
100 - f.BandwidthMbps, // Invert: higher bandwidth = lower value (capped at 100)
f.PacketLossRate,
100 - f.ConnectivityPct, // Invert: higher connectivity = lower value
1 - f.NATScore, // Invert: higher NAT score = lower value


medium

The hardcoded value 100 is used to invert BandwidthMbps and ConnectivityPct. This assumes a maximum value of 100, which might not always hold true and is implicitly coupled with the ranges in DefaultPeerFeatureRanges. To make this more robust, consider defining this maximum value as a constant or deriving it from the feature range definitions to avoid inconsistencies.
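A small sketch of the range-driven alternative suggested here, where the inversion ceiling comes from the declared feature range rather than a hardcoded 100. FeatureRange and invert are illustrative names, not the package's actual API.

```go
package main

import "fmt"

// FeatureRange declares the expected [Min, Max] span of one feature,
// analogous to the ranges in DefaultPeerFeatureRanges.
type FeatureRange struct{ Min, Max float64 }

// invert maps a "higher is better" value onto a "lower is better"
// scale using the feature's declared range, so the ceiling can never
// drift out of sync with the range definitions.
func invert(value float64, r FeatureRange) float64 {
	return r.Max - (value - r.Min)
}

func main() {
	bandwidth := FeatureRange{Min: 0, Max: 100}
	fmt.Println(invert(80, bandwidth)) // 20

	natScore := FeatureRange{Min: 0, Max: 1}
	fmt.Println(invert(0.9, natScore)) // ~0.1
}
```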


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
dns_tools_test.go (1)

709-731: Consider testing the actual RDAP URL builder functions.

These tests verify URL format expectations but don't actually call the buildRDAPDomainURL, buildRDAPIPURL, or buildRDAPASNURL functions. The tests are tautological: they check that a manually constructed URL has the expected prefix.

🔎 Suggested improvement
 func TestBuildRDAPURLs(t *testing.T) {
-	// These test the URL structure, not actual lookups
-
-	// Domain URL
-	domain := "example.com"
-	expectedDomainPrefix := "https://rdap.org/domain/"
-	if !strings.HasPrefix("https://rdap.org/domain/"+domain, expectedDomainPrefix) {
-		t.Error("Domain URL format is incorrect")
-	}
-
-	// IP URL
-	ip := "8.8.8.8"
-	expectedIPPrefix := "https://rdap.org/ip/"
-	if !strings.HasPrefix("https://rdap.org/ip/"+ip, expectedIPPrefix) {
-		t.Error("IP URL format is incorrect")
-	}
-
-	// ASN URL
-	asn := "15169"
-	expectedASNPrefix := "https://rdap.org/autnum/"
-	if !strings.HasPrefix("https://rdap.org/autnum/"+asn, expectedASNPrefix) {
-		t.Error("ASN URL format is incorrect")
-	}
+	// Test actual URL builder functions
+	domainURL := BuildRDAPDomainURL("example.com")
+	if domainURL != "https://rdap.org/domain/example.com" {
+		t.Errorf("Domain URL incorrect: %s", domainURL)
+	}
+
+	ipURL := BuildRDAPIPURL("8.8.8.8")
+	if ipURL != "https://rdap.org/ip/8.8.8.8" {
+		t.Errorf("IP URL incorrect: %s", ipURL)
+	}
+
+	asnURL := BuildRDAPASNURL("AS15169")
+	if asnURL != "https://rdap.org/autnum/15169" {
+		t.Errorf("ASN URL incorrect: %s", asnURL)
+	}
 }
dns_tools.go (3)

202-212: Consider pre-allocating the slice capacity.

The function iterates to filter common types but starts with zero capacity. Pre-allocating based on expected count would avoid reallocations.

🔎 Proposed optimisation
 func GetCommonDNSRecordTypes() []DNSRecordType {
 	info := GetDNSRecordTypeInfo()
-	result := make([]DNSRecordType, 0)
+	result := make([]DNSRecordType, 0, len(info)/2) // ~half are common
 	for _, r := range info {
 		if r.Common {
 			result = append(result, r.Type)
 		}
 	}
 	return result
 }

225-236: Minor formatting inconsistency.

The LookupTimeMs field at line 233 has inconsistent spacing compared to other fields in the struct.

🔎 Proposed fix
 type DNSLookupResult struct {
 	Domain      string      `json:"domain"`
 	QueryType   string      `json:"queryType"`
 	Records     []DNSRecord `json:"records"`
 	MXRecords   []MXRecord  `json:"mxRecords,omitempty"`
 	SRVRecords  []SRVRecord `json:"srvRecords,omitempty"`
 	SOARecord   *SOARecord  `json:"soaRecord,omitempty"`
-	LookupTimeMs int64      `json:"lookupTimeMs"`
+	LookupTimeMs int64       `json:"lookupTimeMs"`
 	Error       string      `json:"error,omitempty"`
 	Timestamp   time.Time   `json:"timestamp"`
 }

696-744: Consider extracting common RDAP HTTP logic.

The three RDAP lookup functions (RDAPLookupDomainWithTimeout, RDAPLookupIPWithTimeout, RDAPLookupASNWithTimeout) share nearly identical HTTP request/response handling. Extracting this into a helper would reduce duplication and centralise error handling.

🔎 Proposed helper function
func rdapHTTPGet(serverURL string, timeout time.Duration) ([]byte, error) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(serverURL)
	if err != nil {
		return nil, fmt.Errorf("RDAP request failed: %w", err)
	}
	defer func() { _ = resp.Body.Close() }()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("failed to read response: %w", err)
	}

	if resp.StatusCode != http.StatusOK {
		return body, fmt.Errorf("RDAP server returned status %d", resp.StatusCode)
	}

	return body, nil
}

Each RDAP function could then call this helper and handle the result consistently.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 299b01e and 298791e.

📒 Files selected for processing (9)
  • dns_tools.go
  • dns_tools_test.go
  • examples/wasm-browser-ts/src/main.ts
  • kdtree.go
  • kdtree_analytics.go
  • kdtree_analytics_test.go
  • npm/poindexter-wasm/index.d.ts
  • npm/poindexter-wasm/loader.js
  • wasm/main.go
🧰 Additional context used
🧬 Code graph analysis (5)
examples/wasm-browser-ts/src/main.ts (1)
npm/poindexter-wasm/smoke.mjs (3)
  • tree (16-16)
  • px (8-12)
  • nn (19-19)
kdtree_analytics_test.go (1)
kdtree_analytics.go (17)
  • NewTreeAnalytics (30-36)
  • NewPeerAnalytics (144-150)
  • ComputeDistributionStats (288-343)
  • ComputeAxisDistributions (371-396)
  • NATRoutingMetrics (399-420)
  • NATTypeOpen (426-426)
  • PeerQualityScore (441-472)
  • QualityWeights (475-485)
  • DefaultQualityWeights (494-506)
  • TrustMetrics (535-554)
  • ComputeTrustScore (557-587)
  • StandardPeerFeatures (615-624)
  • DefaultPeerFeatureRanges (661-674)
  • NormalizePeerFeatures (677-687)
  • WeightedPeerFeatures (690-700)
  • StandardFeatureLabels (642-653)
  • PeerAnalytics (132-141)
npm/poindexter-wasm/index.d.ts (2)
kdtree_analytics.go (10)
  • TreeAnalytics (13-27)
  • PeerStats (262-267)
  • DistributionStats (270-285)
  • AxisDistribution (364-368)
  • NATTypeClassification (423-423)
  • NATRoutingMetrics (399-420)
  • QualityWeights (475-485)
  • TrustMetrics (535-554)
  • FeatureRanges (656-658)
  • StandardPeerFeatures (615-624)
dns_tools.go (24)
  • DNSRecordType (21-21)
  • CAARecord (86-90)
  • SSHFPRecord (93-97)
  • TLSARecord (100-105)
  • DSRecord (108-113)
  • DNSKEYRecord (116-121)
  • NAPTRRecord (124-131)
  • RPRecord (134-137)
  • LOCRecord (140-147)
  • ALIASRecord (150-152)
  • WebRedirectRecord (155-159)
  • ExternalToolLinks (802-848)
  • RDAPEvent (544-548)
  • RDAPEntity (551-559)
  • RDAPLink (574-579)
  • RDAPRemark (582-586)
  • RDAPResponse (509-541)
  • ParsedDomainInfo (951-960)
  • DNSLookupResult (226-236)
  • DNSRecord (53-58)
  • MXRecord (61-64)
  • SRVRecord (67-72)
  • SOARecord (75-83)
  • CompleteDNSLookup (239-251)
kdtree_analytics.go (2)
kdtree.go (1)
  • KDPoint (26-30)
kdtree_helpers.go (1)
  • AxisStats (22-25)
dns_tools.go (2)
npm/poindexter-wasm/index.d.ts (10)
  • DNSRecordType (197-222)
  • DNSRecordTypeInfo (225-231)
  • DNSRecord (465-470)
  • MXRecord (473-476)
  • SRVRecord (479-484)
  • SOARecord (487-495)
  • DNSLookupResult (452-462)
  • RDAPResponse (405-437)
  • ExternalToolLinks (310-355)
  • ParsedDomainInfo (440-449)
poindexter.go (1)
  • Version (5-7)
🪛 GitHub Actions: CI
dns_tools.go

[error] 666-666: golangci-lint error: dns_tools.go:666:23: Error return value of resp.Body.Close is not checked (errcheck).

🪛 GitHub Check: build-test-gonum
dns_tools.go

[failure] 772-772:
Error return value of resp.Body.Close is not checked (errcheck)


[failure] 721-721:
Error return value of resp.Body.Close is not checked (errcheck)


[failure] 666-666:
Error return value of resp.Body.Close is not checked (errcheck)

🪛 GitHub Check: build-test-wasm
dns_tools.go

[failure] 772-772:
Error return value of resp.Body.Close is not checked (errcheck)


[failure] 721-721:
Error return value of resp.Body.Close is not checked (errcheck)


[failure] 666-666:
Error return value of resp.Body.Close is not checked (errcheck)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-test-wasm
🔇 Additional comments (51)
kdtree_analytics_test.go (7)

1-7: LGTM - Clean test file setup.

The package declaration and imports are appropriate for a comprehensive test suite covering analytics functionality.


13-51: LGTM - Solid coverage of TreeAnalytics basic operations.

Tests correctly validate initial state, query counting, timing aggregation, and min/max tracking.


53-106: LGTM - Snapshot and reset behaviour well validated.

The tests appropriately verify that snapshots capture the correct point-in-time data and that reset clears all counters.


112-195: LGTM - Comprehensive PeerAnalytics test coverage.

Tests cover the full lifecycle of peer analytics including selection recording, average distance calculation with appropriate floating-point tolerance, sorting by selection count, and reset behaviour.


201-267: LGTM - Distribution statistics tests are mathematically correct.

The tests properly validate edge cases (empty, single value) and verify mean, median, variance, and percentiles with appropriate tolerances.


273-468: LGTM - NAT routing and trust scoring tests provide good boundary coverage.

The tests effectively validate quality scoring for extreme cases (perfect vs poor peers) and trust calculations for different peer histories (new, good reputation, bad reputation).


474-662: LGTM - Feature normalisation and KD-Tree integration tests are thorough.

The integration tests validate the complete analytics workflow including query tracking, peer selection recording, and importantly verify that Points() returns a defensive copy.

kdtree.go (8)

222-226: LGTM - Analytics fields added to KDTree struct.

The analytics and peerAnalytics fields are appropriately added as pointer types, allowing nil checks for optional disabling.


266-277: LGTM - Analytics initialised in NewKDTree constructor.

Analytics trackers are properly initialised alongside other tree fields.


293-303: LGTM - Analytics initialised in NewKDTreeFromDim constructor.

Consistent initialisation pattern with the primary constructor.


317-351: LGTM - Nearest method properly instrumented.

The defer pattern correctly captures query duration, and peer selections are recorded for both backend paths.


360-404: LGTM - KNearest method properly instrumented.

Query timing and per-neighbour selection recording are correctly implemented for both backend paths.


411-457: LGTM - Radius method properly instrumented.

Consistent instrumentation pattern with Nearest and KNearest methods.


474-529: LGTM - Insert and Delete operations tracked correctly.

Both mutation operations and backend rebuilds are properly recorded in analytics.


531-593: LGTM - Well-designed public analytics API.

The API correctly handles nil analytics, returns defensive copies where appropriate (Points()), and provides both raw access and snapshot methods for different use cases.

examples/wasm-browser-ts/src/main.ts (4)

16-34: LGTM - Basic KD-Tree operations demo.

Clear demonstration of tree creation, insertion, and query operations with descriptive peer IDs.


35-77: LGTM - Analytics demonstration is comprehensive.

The demo effectively showcases tree analytics, peer selection statistics, and axis distributions.


78-138: LGTM - NAT routing and trust score demonstrations.

The demo provides realistic example metrics and correctly uses the peer quality and trust scoring APIs.


139-169: LGTM - Feature normalisation and analytics reset demonstration.

The demo completes the analytics workflow by showing feature normalisation, weighting, and the reset functionality.

dns_tools_test.go (7)

1-6: LGTM - Clean test file setup.

Appropriate package and minimal imports for a unit test file.


12-130: LGTM - External tool links tests provide good coverage.

Tests validate link generation for domains, IPs, and emails with appropriate URL structure checks.


136-287: LGTM - DNS record type tests are comprehensive.

Tests validate both standard and extended record types, type metadata, and the distinction between common and all types.


289-340: LGTM - DNS lookup structure tests validate type definitions.

The tests confirm that the DNS lookup result structures can be properly instantiated and accessed.


346-461: LGTM - RDAP tests validate structure and parsing.

Tests cover response structure, event parsing, registrar extraction, DNSSEC detection, and server availability for common TLDs/RIRs.


467-666: LGTM - Record type structure tests are thorough.

Each DNS record type structure (MX, SRV, SOA, CAA, SSHFP, TLSA, DS, NAPTR, RP, LOC, ALIAS, WebRedirect) is validated.


672-703: LGTM - Helper function tests are well-structured.

The isNoSuchHostError tests use a table-driven approach with good coverage of expected and unexpected error patterns.

npm/poindexter-wasm/loader.js (2)

41-58: LGTM - PxTree class extended with analytics methods.

The analytics methods are cleanly added with consistent patterns and appropriate parameter passing.


87-115: LGTM - API surface properly extended.

The new API methods are well-organised into logical groups (core, statistics, NAT routing, DNS tools) and align with the WASM exports.

npm/poindexter-wasm/index.d.ts (5)

18-68: LGTM - Analytics type definitions are well-structured.

The TypeScript interfaces for TreeAnalytics, PeerStats, DistributionStats, and AxisDistribution accurately reflect the Go implementations with appropriate JSDoc comments.


70-157: LGTM - NAT routing and peer quality types are comprehensive.

The type definitions for NAT classification, routing metrics, quality weights, trust metrics, and feature handling accurately mirror the Go implementations.


163-180: LGTM - PxTree interface properly extended.

The analytics methods are added with correct parameter and return types, maintaining consistency with the loader implementation.


192-510: LGTM - DNS and RDAP type definitions are comprehensive.

The types cover all DNS record types (standard and extended), RDAP response structures, and external tool links with appropriate optional fields marked.


516-546: LGTM - PxAPI interface comprehensively extended.

The interface correctly declares all new API methods with appropriate parameter and return types, maintaining type safety across the WASM boundary.

wasm/main.go (5)

218-232: LGTM - exportJSON enhanced to include all points and backend info.

The export now provides a complete tree representation including dimension, length, backend type, and all points.


234-364: LGTM - Analytics WASM bindings are well-implemented.

The functions correctly handle tree ID lookups, marshal analytics data to JS-compatible maps, and properly convert timestamps to UnixMilli for JavaScript Date compatibility.


366-528: LGTM - Statistics and peer quality bindings are correctly implemented.

The functions properly extract metrics from JS objects, handle optional parameters, and return appropriately marshalled results.


534-700: LGTM - DNS tools WASM bindings are comprehensive.

The external tool links conversion helper correctly maps all fields, RDAP URL builders include ASN prefix normalisation, and DNS record type info is properly converted to JS arrays.


702-746: LGTM - All new functions properly exported.

The exports are well-organised into logical groups and use consistent naming with the px prefix.

kdtree_analytics.go (7)

1-9: LGTM - Clean file setup with appropriate imports.

The imports correctly include sync for mutex, sync/atomic for lock-free counters, and math for calculations.


11-114: LGTM - TreeAnalytics implementation is well-designed.

The use of atomic operations for counters and CAS loops for min/max tracking is appropriate for analytics. The "best-effort" approach for min/max is documented and acceptable for non-critical metrics.


131-259: LGTM - PeerAnalytics concurrency handling is correct.

The double-checked locking pattern for map entry creation and atomic float addition via CAS on bit representation are valid concurrent programming techniques.


269-361: LGTM - Distribution statistics implementation is mathematically sound.

The implementation correctly uses population variance (appropriate for full dataset analytics), and the percentile calculation with linear interpolation follows standard practice.


398-532: LGTM - NAT routing and peer quality scoring are well-designed.

The quality scoring normalises metrics appropriately, applies configurable weights, and clamps the final score to [0,1]. The NAT type scoring provides sensible rankings for different NAT traversal difficulties.


534-587: LGTM - Trust scoring algorithm is well-designed.

The algorithm appropriately handles new peers, combines multiple trust factors with sensible caps, and uses exponential decay for recency bonuses.


605-700: LGTM - Feature handling implementation is correct.

The feature inversion logic appropriately transforms "higher is better" metrics for distance-based KD-Tree queries. The scale01 helper function (defined in kdtree_helpers.go) is correctly used for normalisation. Normalisation and weighting functions are straightforward and correct.

dns_tools.go (6)

1-14: LGTM!

Clean import set using only standard library packages. All imports are utilised throughout the file.


23-50: LGTM!

DNS record type constants are well-documented and align with the TypeScript definitions in index.d.ts.


52-159: LGTM!

Struct definitions are well-organised with appropriate JSON tags and inline documentation for field semantics.


915-944: LGTM!

Email parsing logic correctly extracts the domain portion. The fallback to treating the input as a domain when no @ is present is sensible.


962-1006: LGTM!

The parsing logic correctly extracts key domain information from RDAP responses. Event action strings align with common RDAP conventions.


367-386: The existing comment on line 368 already documents that SRV records require the full service and protocol prefix in the domain name (e.g., _http._tcp.example.com). Passing empty strings for service and protocol to LookupSRV is correct per Go's net package design: when both are empty, the name is looked up directly, which intentionally supports non-standard SRV publishing. The expectation is documented.

Comment on lines +659 to +689
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(serverURL)
	if err != nil {
		result.Error = fmt.Sprintf("RDAP request failed: %s", err.Error())
		result.LookupTimeMs = time.Since(start).Milliseconds()
		return result
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		result.Error = fmt.Sprintf("failed to read response: %s", err.Error())
		result.LookupTimeMs = time.Since(start).Milliseconds()
		return result
	}

	result.RawJSON = string(body)

	if resp.StatusCode != http.StatusOK {
		result.Error = fmt.Sprintf("RDAP server returned status %d", resp.StatusCode)
		result.LookupTimeMs = time.Since(start).Milliseconds()
		return result
	}

	if err := json.Unmarshal(body, &result); err != nil {
		result.Error = fmt.Sprintf("failed to parse RDAP response: %s", err.Error())
	}

	result.LookupTimeMs = time.Since(start).Milliseconds()
	return result
}

⚠️ Potential issue | 🟡 Minor

Unchecked resp.Body.Close error is failing CI.

The linter requires the error return value to be handled. This pattern repeats at lines 666, 721, and 772.

🔎 Proposed fix for RDAPLookupDomainWithTimeout
 	client := &http.Client{Timeout: timeout}
 	resp, err := client.Get(serverURL)
 	if err != nil {
 		result.Error = fmt.Sprintf("RDAP request failed: %s", err.Error())
 		result.LookupTimeMs = time.Since(start).Milliseconds()
 		return result
 	}
-	defer resp.Body.Close()
+	defer func() { _ = resp.Body.Close() }()

Apply the same pattern to lines 721 and 772.

🧰 Tools
🪛 GitHub Actions: CI

[error] 666-666: golangci-lint error: dns_tools.go:666:23: Error return value of resp.Body.Close is not checked (errcheck).

🪛 GitHub Check: build-test-gonum

[failure] 666-666:
Error return value of resp.Body.Close is not checked (errcheck)

🪛 GitHub Check: build-test-wasm

[failure] 666-666:
Error return value of resp.Body.Close is not checked (errcheck)

🤖 Prompt for AI Agents
In dns_tools.go around lines 659 to 689, the resp.Body.Close() error is not
being checked which fails CI; replace the defer resp.Body.Close() with an
explicit close call where appropriate and check its error return (e.g., after
reading/parsing the body and at the other locations mentioned on lines ~666,
~721, ~772), and if Close returns an error set result.Error =
fmt.Sprintf("failed to close response body: %s", err.Error()), set
result.LookupTimeMs = time.Since(start).Milliseconds() and return result so the
error is handled consistently.

@Snider Snider merged commit e182d4f into main Dec 29, 2025
2 of 7 checks passed