feat: Expose networking analytics for KD-Tree NAT routing #4
Conversation
Add comprehensive networking analytics to support peer selection based on performance and trust metrics for KD-Tree based NAT routing:
- Add kdtree_analytics.go with TreeAnalytics, PeerAnalytics, DistributionStats, NATRoutingMetrics, TrustMetrics, and QualityWeights structures
- Track query/insert/delete operations with timing statistics
- Track per-peer selection frequency and average distances
- Add PeerQualityScore() for composite peer ranking
- Add ComputeTrustScore() for reputation-based selection
- Add distribution statistics (min, max, mean, median, percentiles)
- Add feature normalization helpers for multi-dimensional peer data

WASM/TypeScript integration:
- Expose all analytics via WASM bindings
- Update TypeScript definitions with full type coverage
- Update loader.js with new API methods
- Update TypeScript demo to showcase analytics features

Includes comprehensive test coverage for all analytics functionality.
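To make the composite-scoring idea above concrete, here is a minimal standalone Go sketch of weighted, clamped peer-quality ranking. It is illustrative only: the metric names, weights, and the 500 ms latency ceiling are assumptions, not the actual PeerQualityScore or QualityWeights implementation in kdtree_analytics.go.

```go
package main

import "fmt"

// Illustrative metric/weight shapes; the real NATRoutingMetrics and
// QualityWeights structs in this PR may differ.
type peerMetrics struct {
    LatencyMs    float64 // lower is better
    PacketLoss   float64 // 0..1, lower is better
    Connectivity float64 // 0..1, higher is better
}

type weights struct {
    Latency, Loss, Connectivity float64
}

// qualityScore combines normalised metrics into a single 0..1 ranking,
// the general pattern a composite peer score follows.
func qualityScore(m peerMetrics, w weights) float64 {
    latency := 1 - clamp01(m.LatencyMs/500) // assume 500ms as the "bad" ceiling
    loss := 1 - clamp01(m.PacketLoss)
    conn := clamp01(m.Connectivity)
    total := w.Latency + w.Loss + w.Connectivity
    if total == 0 {
        return 0
    }
    return clamp01((w.Latency*latency + w.Loss*loss + w.Connectivity*conn) / total)
}

func clamp01(v float64) float64 {
    if v < 0 {
        return 0
    }
    if v > 1 {
        return 1
    }
    return v
}

func main() {
    good := peerMetrics{LatencyMs: 20, PacketLoss: 0.001, Connectivity: 0.99}
    poor := peerMetrics{LatencyMs: 450, PacketLoss: 0.2, Connectivity: 0.4}
    w := weights{Latency: 0.5, Loss: 0.3, Connectivity: 0.2}
    fmt.Printf("good=%.3f poor=%.3f\n", qualityScore(good, w), qualityScore(poor, w))
}
```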
Add comprehensive DNS tools module for network analysis.

DNS Lookup functionality:
- Support for A, AAAA, MX, TXT, NS, CNAME, SOA, PTR, SRV, CAA records
- DNSLookup() and DNSLookupAll() for single/complete lookups
- Configurable timeouts
- Structured result types for all record types

RDAP (new-style WHOIS) support:
- RDAPLookupDomain() for domain registration data
- RDAPLookupIP() for IP address information
- RDAPLookupASN() for autonomous system info
- Built-in server registry for common TLDs and RIRs
- ParseRDAPResponse() for extracting key domain info

External tool link generators:
- GetExternalToolLinks() - 20+ links for domain analysis
- GetExternalToolLinksIP() - IP-specific analysis tools
- GetExternalToolLinksEmail() - Email/domain verification

Tools include: MXToolbox (DNS, MX, SPF, DMARC, DKIM, blacklist), DNSChecker, ViewDNS, IntoDNS, DNSViz, SecurityTrails, SSL Labs, Shodan, Censys, IPInfo, AbuseIPDB, VirusTotal, and more. WASM bindings expose link generators and RDAP URL builders for use in TypeScript/browser environments.
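As a rough idea of what the lookup layer wraps, the sketch below resolves MX records sorted by priority with Go's standard net package and builds a couple of external-tool URLs. This is not the PR's DNSLookup/GetExternalToolLinks code; the URL formats and the helper names here are assumptions for illustration.

```go
package main

import (
    "context"
    "fmt"
    "net"
    "net/url"
    "sort"
    "time"
)

// lookupMXSorted resolves MX records with a timeout and sorts them by priority,
// roughly what a structured DNS lookup helper does internally.
func lookupMXSorted(domain string, timeout time.Duration) ([]*net.MX, error) {
    ctx, cancel := context.WithTimeout(context.Background(), timeout)
    defer cancel()
    records, err := net.DefaultResolver.LookupMX(ctx, domain)
    if err != nil {
        return nil, err
    }
    sort.Slice(records, func(i, j int) bool { return records[i].Pref < records[j].Pref })
    return records, nil
}

// externalToolLinks builds a couple of analysis URLs; the exact set and
// formats produced by GetExternalToolLinks in this PR may differ.
func externalToolLinks(domain string) map[string]string {
    q := url.QueryEscape(domain)
    return map[string]string{
        "mxtoolbox":  "https://mxtoolbox.com/SuperTool.aspx?action=mx%3a" + q,
        "dnschecker": "https://dnschecker.org/#A/" + q,
    }
}

func main() {
    if mx, err := lookupMXSorted("example.com", 5*time.Second); err == nil {
        for _, r := range mx {
            fmt.Println(r.Pref, r.Host)
        }
    }
    fmt.Println(externalToolLinks("example.com"))
}
```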
Add support for 13 additional record types: ALIAS, RP, SSHFP, TLSA, DS, DNSKEY, NAPTR, LOC, HINFO, CERT, SMIMEA, WR (Web Redirect), SPF.
- Add GetDNSRecordTypeInfo() for metadata with RFC references
- Add GetCommonDNSRecordTypes() for commonly used types
- Add structured types for CAA, SSHFP, TLSA, DS, DNSKEY, NAPTR, RP, LOC, ALIAS, and WebRedirect records
- Export new functions in WASM bindings
- Update TypeScript definitions and loader.js
- Add comprehensive tests for new record types
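A sketch of the metadata-table approach described above: each record type carries an RFC reference and a "common" flag, and the common subset is derived by filtering. The struct fields and the entries below are illustrative, not the PR's GetDNSRecordTypeInfo data.

```go
package main

import "fmt"

// recordTypeInfo mirrors the idea of per-type metadata with an RFC reference;
// the field names and table entries below are assumptions for illustration.
type recordTypeInfo struct {
    Type        string
    Description string
    RFC         string
    Common      bool
}

var recordTypeTable = []recordTypeInfo{
    {"A", "IPv4 address", "RFC 1035", true},
    {"AAAA", "IPv6 address", "RFC 3596", true},
    {"TLSA", "TLS certificate association (DANE)", "RFC 6698", false},
    {"SSHFP", "SSH public key fingerprint", "RFC 4255", false},
}

// commonRecordTypes derives the commonly used subset from the table,
// so the two lists cannot drift apart.
func commonRecordTypes() []string {
    out := make([]string, 0, len(recordTypeTable))
    for _, r := range recordTypeTable {
        if r.Common {
            out = append(out, r.Type)
        }
    }
    return out
}

func main() {
    fmt.Println(commonRecordTypes()) // [A AAAA]
}
```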
📝 Walkthrough
Summary by CodeRabbit
Release Notes
✏️ Tip: You can customize this high-level summary in your review settings.

Walkthrough
The pull request introduces comprehensive DNS and RDAP tooling capabilities alongside extensive KDTree analytics tracking. New functionality includes DNS record type abstractions, domain and IP lookups with RDAP integration, external tool link generation, and peer-based analytics with quality and trust scoring, all exposed through WASM bindings for JavaScript consumption.

Changes
Sequence Diagram(s)

sequenceDiagram
participant App as Application
participant KDT as KDTree
participant TA as TreeAnalytics
participant PA as PeerAnalytics
App->>KDT: Nearest(query)
activate KDT
Note over KDT: Start timing
KDT->>TA: Record query start
KDT->>KDT: Perform nearest search
KDT->>PA: Record peer selection
deactivate KDT
KDT->>TA: Record query duration
activate TA
TA->>TA: Update query count,<br/>timing metrics
deactivate TA
KDT-->>App: Return result + timing
App->>KDT: KNearest(query, k)
activate KDT
KDT->>TA: Record query start
KDT->>KDT: Perform k-nearest search
loop For each k result
KDT->>PA: Record peer selection
end
KDT->>TA: Record query duration
deactivate KDT
activate PA
PA->>PA: Increment hit counts,<br/>update distances
deactivate PA
KDT-->>App: Return k results
App->>KDT: GetAnalyticsSnapshot()
activate KDT
KDT->>TA: Snapshot query stats
deactivate KDT
KDT-->>App: TreeAnalyticsSnapshot
App->>KDT: GetPeerStats()
activate KDT
KDT->>PA: Calculate stats per peer
deactivate KDT
KDT-->>App: []PeerStats sorted by frequency
sequenceDiagram
participant User as User / Client
participant DNS as DNS Resolver
participant RDAP as RDAP Server
participant Tools as External Tools
User->>DNS: DNSLookupAll(domain)
activate DNS
par A Lookup
DNS->>DNS: Lookup A records
and AAAA Lookup
DNS->>DNS: Lookup AAAA records
and MX Lookup
DNS->>DNS: Lookup MX records<br/>(sort by priority)
and NS Lookup
DNS->>DNS: Lookup NS records
and TXT Lookup
DNS->>DNS: Lookup TXT records
and SOA Lookup
DNS->>DNS: Lookup SOA record
and CNAME Lookup
DNS->>DNS: Lookup CNAME record
end
deactivate DNS
DNS-->>User: CompleteDNSLookup<br/>(all records + timing)
User->>RDAP: RDAPLookupDomain(domain)
activate RDAP
RDAP->>RDAP: Query RDAP endpoint
deactivate RDAP
RDAP-->>User: RDAPResponse<br/>(registrar, dates, status)
User->>DNS: ParseRDAPResponse(rdapResp)
activate DNS
DNS->>DNS: Extract registrar,<br/>dates, DNSSEC,<br/>nameservers
deactivate DNS
DNS-->>User: ParsedDomainInfo
User->>Tools: GetExternalToolLinks(domain)
activate Tools
Tools->>Tools: Generate URLs for<br/>MXToolbox, DNSChecker,<br/>WhoIs, SecurityTrails, etc.
deactivate Tools
Tools-->>User: ExternalToolLinks<br/>(mapped service URLs)
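The RDAP leg of the diagram can be sketched with a plain HTTP GET against the rdap.org bootstrap endpoint (the URL pattern the PR's tests reference) and a partial decode of the response. The struct below is a deliberately tiny assumed subset, not the PR's RDAPResponse or ParsedDomainInfo types.

```go
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// rdapDomain is a tiny subset of an RDAP domain response (RFC 9083 field names);
// the real RDAPResponse type in this PR carries far more fields.
type rdapDomain struct {
    LdhName string   `json:"ldhName"`
    Status  []string `json:"status"`
    Events  []struct {
        EventAction string `json:"eventAction"`
        EventDate   string `json:"eventDate"`
    } `json:"events"`
}

// rdapLookupDomain queries the rdap.org bootstrap service for a domain.
func rdapLookupDomain(domain string, timeout time.Duration) (*rdapDomain, error) {
    client := &http.Client{Timeout: timeout}
    resp, err := client.Get("https://rdap.org/domain/" + domain)
    if err != nil {
        return nil, err
    }
    defer func() { _ = resp.Body.Close() }()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("rdap status %d", resp.StatusCode)
    }
    var out rdapDomain
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return nil, err
    }
    return &out, nil
}

func main() {
    if d, err := rdapLookupDomain("example.com", 10*time.Second); err == nil {
        fmt.Println(d.LdhName, d.Status)
        for _, ev := range d.Events {
            fmt.Println(ev.EventAction, ev.EventDate) // e.g. "registration", "expiration"
        }
    }
}
```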
Summary of Changes

Hello @Snider, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the KD-Tree implementation by integrating a robust analytics framework and advanced peer selection mechanisms. The primary goal is to enable data-driven decisions for NAT routing, allowing for more intelligent peer selection based on performance, trust, and network characteristics. Additionally, a new set of DNS and network diagnostic tools has been introduced, expanding the utility of the library.

Highlights
Codecov Report
❌ Patch coverage is
Code Review
This pull request introduces a comprehensive set of analytics and scoring mechanisms for peer selection in the KD-Tree, which is a great enhancement. The new features are well-tested and exposed through the WASM API with corresponding TypeScript definitions.
I have a few suggestions, mostly around maintainability and code clarity:
- The new file kdtree_analytics.go is quite large and could be split into smaller, more focused files.
- There are a few magic numbers in the scoring and feature generation logic that could be extracted into named constants.
- The new dns_tools.go file is a significant addition that broadens the PR's scope. It might be worth considering its placement in the long term.
Overall, this is a solid contribution with extensive new functionality.
// GetCommonDNSRecordTypes returns only commonly used record types
func GetCommonDNSRecordTypes() []DNSRecordType {
    info := GetDNSRecordTypeInfo()
    result := make([]DNSRecordType, 0)
The slice result is initialized with a size of 0 and then appended to in a loop. A more idiomatic and performant approach in Go is to pre-allocate the slice with a known capacity to avoid reallocations, especially when the final size can be estimated.
-    result := make([]DNSRecordType, 0)
+    result := make([]DNSRecordType, 0, len(info))
func GetAllDNSRecordTypes() []DNSRecordType {
    return []DNSRecordType{
        DNSRecordA, DNSRecordAAAA, DNSRecordCNAME, DNSRecordMX, DNSRecordTXT,
        DNSRecordNS, DNSRecordSOA, DNSRecordPTR, DNSRecordSRV, DNSRecordCAA,
        DNSRecordALIAS, DNSRecordRP, DNSRecordSSHFP, DNSRecordTLSA, DNSRecordDS,
        DNSRecordDNSKEY, DNSRecordNAPTR, DNSRecordLOC, DNSRecordHINFO, DNSRecordCERT,
        DNSRecordSMIMEA, DNSRecordSPF, DNSRecordWR,
    }
}
GetAllDNSRecordTypes returns a hardcoded list of record types. This is fragile and can get out of sync if GetDNSRecordTypeInfo is updated. To improve maintainability, this function should derive its list from GetDNSRecordTypeInfo.
func GetAllDNSRecordTypes() []DNSRecordType {
    info := GetDNSRecordTypeInfo()
    result := make([]DNSRecordType, len(info))
    for i, r := range info {
        result[i] = r.Type
    }
    return result
}

@@ -0,0 +1,700 @@
package poindexter
This file is over 700 lines long and handles several distinct responsibilities, including tree operational analytics, peer selection analytics, statistical utilities, peer quality scoring, and feature normalization. For better maintainability and separation of concerns, consider splitting this into multiple, smaller files. For example:
- tree_analytics.go (TreeAnalytics)
- peer_analytics.go (PeerAnalytics)
- peer_scoring.go (NATRoutingMetrics, TrustMetrics, scoring functions)
- peer_features.go (feature definitions, normalization)
- stats.go (DistributionStats)
    successRate := float64(t.SuccessfulTransactions) / float64(total)

    // Volume confidence (more transactions = more confident)
    volumeConfidence := 1 - 1/(1+float64(total)/10)
    recencyBonus := 0.0
    if !t.LastSuccessAt.IsZero() {
        hoursSince := time.Since(t.LastSuccessAt).Hours()
        recencyBonus = 0.1 * math.Exp(-hoursSince/168) // Decays over ~1 week
    100 - f.BandwidthMbps,   // Invert: higher bandwidth = lower value (capped at 100)
    f.PacketLossRate,
    100 - f.ConnectivityPct, // Invert: higher connectivity = lower value
    1 - f.NATScore,          // Invert: higher NAT score = lower value
The hardcoded value 100 is used to invert BandwidthMbps and ConnectivityPct. This assumes a maximum value of 100, which might not always hold true and is implicitly coupled with the ranges in DefaultPeerFeatureRanges. To make this more robust, consider defining this maximum value as a constant or deriving it from the feature range definitions to avoid inconsistencies.
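One way to address this, as a sketch: pull the ceiling out of the inversion expressions and tie it to a feature-range definition. The constant names and the featureRange struct below are hypothetical, not the PR's DefaultPeerFeatureRanges.

```go
// Hypothetical illustration of deriving the inversion ceiling from a
// feature-range definition instead of hardcoding 100 inline.
package main

import (
    "fmt"
    "math"
)

type featureRange struct{ Min, Max float64 }

var bandwidthRange = featureRange{Min: 0, Max: 100}    // assumed ceiling in Mbps
var connectivityRange = featureRange{Min: 0, Max: 100} // assumed percentage ceiling

// invert maps "higher is better" onto "lower is better" using the range's Max,
// clamping so out-of-range inputs cannot go negative.
func invert(value float64, r featureRange) float64 {
    return r.Max - math.Min(math.Max(value, r.Min), r.Max)
}

func main() {
    fmt.Println(invert(95, bandwidthRange))     // 5: near-ceiling bandwidth -> small distance
    fmt.Println(invert(120, connectivityRange)) // 0: clamped at the ceiling
}
```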
Actionable comments posted: 1
🧹 Nitpick comments (4)
dns_tools_test.go (1)
709-731: Consider testing the actual RDAP URL builder functions. These tests verify URL format expectations but don't actually call the buildRDAPDomainURL, buildRDAPIPURL, or buildRDAPASNURL functions. The tests are tautological - they check that a manually constructed URL has the expected prefix.
🔎 Suggested improvement
 func TestBuildRDAPURLs(t *testing.T) {
-    // These test the URL structure, not actual lookups
-
-    // Domain URL
-    domain := "example.com"
-    expectedDomainPrefix := "https://rdap.org/domain/"
-    if !strings.HasPrefix("https://rdap.org/domain/"+domain, expectedDomainPrefix) {
-        t.Error("Domain URL format is incorrect")
-    }
-
-    // IP URL
-    ip := "8.8.8.8"
-    expectedIPPrefix := "https://rdap.org/ip/"
-    if !strings.HasPrefix("https://rdap.org/ip/"+ip, expectedIPPrefix) {
-        t.Error("IP URL format is incorrect")
-    }
-
-    // ASN URL
-    asn := "15169"
-    expectedASNPrefix := "https://rdap.org/autnum/"
-    if !strings.HasPrefix("https://rdap.org/autnum/"+asn, expectedASNPrefix) {
-        t.Error("ASN URL format is incorrect")
-    }
+    // Test actual URL builder functions
+    domainURL := BuildRDAPDomainURL("example.com")
+    if domainURL != "https://rdap.org/domain/example.com" {
+        t.Errorf("Domain URL incorrect: %s", domainURL)
+    }
+
+    ipURL := BuildRDAPIPURL("8.8.8.8")
+    if ipURL != "https://rdap.org/ip/8.8.8.8" {
+        t.Errorf("IP URL incorrect: %s", ipURL)
+    }
+
+    asnURL := BuildRDAPASNURL("AS15169")
+    if asnURL != "https://rdap.org/autnum/15169" {
+        t.Errorf("ASN URL incorrect: %s", asnURL)
+    }
 }

dns_tools.go (3)
202-212: Consider pre-allocating the slice capacity. The function iterates to filter common types but starts with zero capacity. Pre-allocating based on expected count would avoid reallocations.
🔎 Proposed optimisation
 func GetCommonDNSRecordTypes() []DNSRecordType {
     info := GetDNSRecordTypeInfo()
-    result := make([]DNSRecordType, 0)
+    result := make([]DNSRecordType, 0, len(info)/2) // ~half are common
     for _, r := range info {
         if r.Common {
             result = append(result, r.Type)
         }
     }
     return result
 }
225-236: Minor formatting inconsistency. The LookupTimeMs field at line 233 has inconsistent spacing compared to other fields in the struct.
🔎 Proposed fix
 type DNSLookupResult struct {
     Domain       string      `json:"domain"`
     QueryType    string      `json:"queryType"`
     Records      []DNSRecord `json:"records"`
     MXRecords    []MXRecord  `json:"mxRecords,omitempty"`
     SRVRecords   []SRVRecord `json:"srvRecords,omitempty"`
     SOARecord    *SOARecord  `json:"soaRecord,omitempty"`
-    LookupTimeMs int64  `json:"lookupTimeMs"`
+    LookupTimeMs int64       `json:"lookupTimeMs"`
     Error        string      `json:"error,omitempty"`
     Timestamp    time.Time   `json:"timestamp"`
 }
696-744: Consider extracting common RDAP HTTP logic. The three RDAP lookup functions (RDAPLookupDomainWithTimeout, RDAPLookupIPWithTimeout, RDAPLookupASNWithTimeout) share nearly identical HTTP request/response handling. Extracting this into a helper would reduce duplication and centralise error handling.
🔎 Proposed helper function
func rdapHTTPGet(serverURL string, timeout time.Duration) ([]byte, error) {
    client := &http.Client{Timeout: timeout}
    resp, err := client.Get(serverURL)
    if err != nil {
        return nil, fmt.Errorf("RDAP request failed: %w", err)
    }
    defer func() { _ = resp.Body.Close() }()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read response: %w", err)
    }
    if resp.StatusCode != http.StatusOK {
        return body, fmt.Errorf("RDAP server returned status %d", resp.StatusCode)
    }
    return body, nil
}

Each RDAP function could then call this helper and handle the result consistently.
📜 Review details
📒 Files selected for processing (9)
- dns_tools.go
- dns_tools_test.go
- examples/wasm-browser-ts/src/main.ts
- kdtree.go
- kdtree_analytics.go
- kdtree_analytics_test.go
- npm/poindexter-wasm/index.d.ts
- npm/poindexter-wasm/loader.js
- wasm/main.go
🧰 Additional context used
🧬 Code graph analysis (5)
examples/wasm-browser-ts/src/main.ts (1)
  npm/poindexter-wasm/smoke.mjs (3): tree (16-16), px (8-12), nn (19-19)
kdtree_analytics_test.go (1)
  kdtree_analytics.go (17): NewTreeAnalytics (30-36), NewPeerAnalytics (144-150), ComputeDistributionStats (288-343), ComputeAxisDistributions (371-396), NATRoutingMetrics (399-420), NATTypeOpen (426-426), PeerQualityScore (441-472), QualityWeights (475-485), DefaultQualityWeights (494-506), TrustMetrics (535-554), ComputeTrustScore (557-587), StandardPeerFeatures (615-624), DefaultPeerFeatureRanges (661-674), NormalizePeerFeatures (677-687), WeightedPeerFeatures (690-700), StandardFeatureLabels (642-653), PeerAnalytics (132-141)
npm/poindexter-wasm/index.d.ts (2)
  kdtree_analytics.go (10): TreeAnalytics (13-27), PeerStats (262-267), DistributionStats (270-285), AxisDistribution (364-368), NATTypeClassification (423-423), NATRoutingMetrics (399-420), QualityWeights (475-485), TrustMetrics (535-554), FeatureRanges (656-658), StandardPeerFeatures (615-624)
  dns_tools.go (24): DNSRecordType (21-21), CAARecord (86-90), SSHFPRecord (93-97), TLSARecord (100-105), DSRecord (108-113), DNSKEYRecord (116-121), NAPTRRecord (124-131), RPRecord (134-137), LOCRecord (140-147), ALIASRecord (150-152), WebRedirectRecord (155-159), ExternalToolLinks (802-848), RDAPEvent (544-548), RDAPEntity (551-559), RDAPLink (574-579), RDAPRemark (582-586), RDAPResponse (509-541), ParsedDomainInfo (951-960), DNSLookupResult (226-236), DNSRecord (53-58), MXRecord (61-64), SRVRecord (67-72), SOARecord (75-83), CompleteDNSLookup (239-251)
kdtree_analytics.go (2)
  kdtree.go (1): KDPoint (26-30)
  kdtree_helpers.go (1): AxisStats (22-25)
dns_tools.go (2)
  npm/poindexter-wasm/index.d.ts (10): DNSRecordType (197-222), DNSRecordTypeInfo (225-231), DNSRecord (465-470), MXRecord (473-476), SRVRecord (479-484), SOARecord (487-495), DNSLookupResult (452-462), RDAPResponse (405-437), ExternalToolLinks (310-355), ParsedDomainInfo (440-449)
  poindexter.go (1): Version (5-7)
🪛 GitHub Actions: CI
dns_tools.go
[error] 666-666: golangci-lint error: dns_tools.go:666:23: Error return value of resp.Body.Close is not checked (errcheck).
🪛 GitHub Check: build-test-gonum
dns_tools.go
[failure] 772-772: Error return value of resp.Body.Close is not checked (errcheck)
[failure] 721-721: Error return value of resp.Body.Close is not checked (errcheck)
[failure] 666-666: Error return value of resp.Body.Close is not checked (errcheck)
🪛 GitHub Check: build-test-wasm
dns_tools.go
[failure] 772-772: Error return value of resp.Body.Close is not checked (errcheck)
[failure] 721-721: Error return value of resp.Body.Close is not checked (errcheck)
[failure] 666-666: Error return value of resp.Body.Close is not checked (errcheck)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-test-wasm
🔇 Additional comments (51)
kdtree_analytics_test.go (7)
1-7: LGTM - Clean test file setup. The package declaration and imports are appropriate for a comprehensive test suite covering analytics functionality.
13-51: LGTM - Solid coverage of TreeAnalytics basic operations. Tests correctly validate initial state, query counting, timing aggregation, and min/max tracking.
53-106: LGTM - Snapshot and reset behaviour well validated. The tests appropriately verify that snapshots capture the correct point-in-time data and that reset clears all counters.
112-195: LGTM - Comprehensive PeerAnalytics test coverage. Tests cover the full lifecycle of peer analytics including selection recording, average distance calculation with appropriate floating-point tolerance, sorting by selection count, and reset behaviour.
201-267: LGTM - Distribution statistics tests are mathematically correct. The tests properly validate edge cases (empty, single value) and verify mean, median, variance, and percentiles with appropriate tolerances.
273-468: LGTM - NAT routing and trust scoring tests provide good boundary coverage. The tests effectively validate quality scoring for extreme cases (perfect vs poor peers) and trust calculations for different peer histories (new, good reputation, bad reputation).
474-662: LGTM - Feature normalisation and KD-Tree integration tests are thorough. The integration tests validate the complete analytics workflow including query tracking, peer selection recording, and importantly verify that Points() returns a defensive copy.
kdtree.go (8)
222-226: LGTM - Analytics fields added to KDTree struct. The analytics and peerAnalytics fields are appropriately added as pointer types, allowing nil checks for optional disabling.
266-277: LGTM - Analytics initialised in NewKDTree constructor. Analytics trackers are properly initialised alongside other tree fields.
293-303: LGTM - Analytics initialised in NewKDTreeFromDim constructor. Consistent initialisation pattern with the primary constructor.
317-351: LGTM - Nearest method properly instrumented. The defer pattern correctly captures query duration, and peer selections are recorded for both backend paths.
360-404: LGTM - KNearest method properly instrumented. Query timing and per-neighbour selection recording are correctly implemented for both backend paths.
411-457: LGTM - Radius method properly instrumented. Consistent instrumentation pattern with Nearest and KNearest methods.
474-529: LGTM - Insert and Delete operations tracked correctly. Both mutation operations and backend rebuilds are properly recorded in analytics.
531-593: LGTM - Well-designed public analytics API. The API correctly handles nil analytics, returns defensive copies where appropriate (Points()), and provides both raw access and snapshot methods for different use cases.
examples/wasm-browser-ts/src/main.ts (4)
16-34: LGTM - Basic KD-Tree operations demo. Clear demonstration of tree creation, insertion, and query operations with descriptive peer IDs.
35-77: LGTM - Analytics demonstration is comprehensive. The demo effectively showcases tree analytics, peer selection statistics, and axis distributions.
78-138: LGTM - NAT routing and trust score demonstrations. The demo provides realistic example metrics and correctly uses the peer quality and trust scoring APIs.
139-169: LGTM - Feature normalisation and analytics reset demonstration. The demo completes the analytics workflow by showing feature normalisation, weighting, and the reset functionality.
dns_tools_test.go (7)
1-6: LGTM - Clean test file setup. Appropriate package and minimal imports for a unit test file.
12-130: LGTM - External tool links tests provide good coverage. Tests validate link generation for domains, IPs, and emails with appropriate URL structure checks.
136-287: LGTM - DNS record type tests are comprehensive. Tests validate both standard and extended record types, type metadata, and the distinction between common and all types.
289-340: LGTM - DNS lookup structure tests validate type definitions. The tests confirm that the DNS lookup result structures can be properly instantiated and accessed.
346-461: LGTM - RDAP tests validate structure and parsing. Tests cover response structure, event parsing, registrar extraction, DNSSEC detection, and server availability for common TLDs/RIRs.
467-666: LGTM - Record type structure tests are thorough. Each DNS record type structure (MX, SRV, SOA, CAA, SSHFP, TLSA, DS, NAPTR, RP, LOC, ALIAS, WebRedirect) is validated.
672-703: LGTM - Helper function tests are well-structured. The isNoSuchHostError tests use a table-driven approach with good coverage of expected and unexpected error patterns.
npm/poindexter-wasm/loader.js (2)
41-58: LGTM - PxTree class extended with analytics methods. The analytics methods are cleanly added with consistent patterns and appropriate parameter passing.
87-115: LGTM - API surface properly extended. The new API methods are well-organised into logical groups (core, statistics, NAT routing, DNS tools) and align with the WASM exports.
npm/poindexter-wasm/index.d.ts (5)
18-68: LGTM - Analytics type definitions are well-structured. The TypeScript interfaces for TreeAnalytics, PeerStats, DistributionStats, and AxisDistribution accurately reflect the Go implementations with appropriate JSDoc comments.
70-157: LGTM - NAT routing and peer quality types are comprehensive. The type definitions for NAT classification, routing metrics, quality weights, trust metrics, and feature handling accurately mirror the Go implementations.
163-180: LGTM - PxTree interface properly extended. The analytics methods are added with correct parameter and return types, maintaining consistency with the loader implementation.
192-510: LGTM - DNS and RDAP type definitions are comprehensive. The types cover all DNS record types (standard and extended), RDAP response structures, and external tool links with appropriate optional fields marked.
516-546: LGTM - PxAPI interface comprehensively extended. The interface correctly declares all new API methods with appropriate parameter and return types, maintaining type safety across the WASM boundary.
wasm/main.go (5)
218-232: LGTM - exportJSON enhanced to include all points and backend info. The export now provides a complete tree representation including dimension, length, backend type, and all points.
234-364: LGTM - Analytics WASM bindings are well-implemented. The functions correctly handle tree ID lookups, marshal analytics data to JS-compatible maps, and properly convert timestamps to UnixMilli for JavaScript Date compatibility.
366-528: LGTM - Statistics and peer quality bindings are correctly implemented. The functions properly extract metrics from JS objects, handle optional parameters, and return appropriately marshalled results.
534-700: LGTM - DNS tools WASM bindings are comprehensive. The external tool links conversion helper correctly maps all fields, RDAP URL builders include ASN prefix normalisation, and DNS record type info is properly converted to JS arrays.
702-746: LGTM - All new functions properly exported. The exports are well-organised into logical groups and use consistent naming with the px prefix.
kdtree_analytics.go (7)
1-9: LGTM - Clean file setup with appropriate imports. The imports correctly include sync for mutex, sync/atomic for lock-free counters, and math for calculations.
11-114: LGTM - TreeAnalytics implementation is well-designed. The use of atomic operations for counters and CAS loops for min/max tracking is appropriate for analytics. The "best-effort" approach for min/max is documented and acceptable for non-critical metrics.
131-259: LGTM - PeerAnalytics concurrency handling is correct. The double-checked locking pattern for map entry creation and atomic float addition via CAS on bit representation are valid concurrent programming techniques (see the sketch after this list).
269-361: LGTM - Distribution statistics implementation is mathematically sound. The implementation correctly uses population variance (appropriate for full dataset analytics), and the percentile calculation with linear interpolation follows standard practice.
398-532: LGTM - NAT routing and peer quality scoring are well-designed. The quality scoring normalises metrics appropriately, applies configurable weights, and clamps the final score to [0,1]. The NAT type scoring provides sensible rankings for different NAT traversal difficulties.
534-587: LGTM - Trust scoring algorithm is well-designed. The algorithm appropriately handles new peers, combines multiple trust factors with sensible caps, and uses exponential decay for recency bonuses.
605-700: LGTM - Feature handling implementation is correct. The feature inversion logic appropriately transforms "higher is better" metrics for distance-based KD-Tree queries. The scale01 helper function (defined in kdtree_helpers.go) is correctly used for normalisation. Normalisation and weighting functions are straightforward and correct.
dns_tools.go (6)
1-14: LGTM! Clean import set using only standard library packages. All imports are utilised throughout the file.
23-50: LGTM! DNS record type constants are well-documented and align with the TypeScript definitions in index.d.ts.
52-159: LGTM! Struct definitions are well-organised with appropriate JSON tags and inline documentation for field semantics.
915-944: LGTM! Email parsing logic correctly extracts the domain portion. The fallback to treating the input as a domain when no @ is present is sensible.
962-1006: LGTM! The parsing logic correctly extracts key domain information from RDAP responses. Event action strings align with common RDAP conventions.
367-386: The existing comment on line 368 already documents that SRV records require the full service and protocol prefix in the domain name (e.g., _http._tcp.example.com). The implementation using empty strings for service and protocol to LookupSRV is correct per Go's net package design, which looks up the name directly when both are empty; this is intentional to support non-standard SRV publishing. The expectation is documented.
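Since the PeerAnalytics comment above references atomic float addition via CAS on the bit representation, here is a small standalone sketch of that technique. It is illustrative only, not the PR's PeerAnalytics code, and the counter name is hypothetical.

```go
package main

import (
    "fmt"
    "math"
    "sync"
    "sync/atomic"
)

// addFloat64 atomically adds delta to the float64 whose bits live in *bits,
// using a compare-and-swap loop on the uint64 representation.
func addFloat64(bits *uint64, delta float64) {
    for {
        old := atomic.LoadUint64(bits)
        next := math.Float64bits(math.Float64frombits(old) + delta)
        if atomic.CompareAndSwapUint64(bits, old, next) {
            return
        }
    }
}

func main() {
    var totalDistance uint64 // bit pattern of a float64, starts at 0.0
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            addFloat64(&totalDistance, 0.5)
        }()
    }
    wg.Wait()
    fmt.Println(math.Float64frombits(atomic.LoadUint64(&totalDistance))) // 50
}
```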
    client := &http.Client{Timeout: timeout}
    resp, err := client.Get(serverURL)
    if err != nil {
        result.Error = fmt.Sprintf("RDAP request failed: %s", err.Error())
        result.LookupTimeMs = time.Since(start).Milliseconds()
        return result
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        result.Error = fmt.Sprintf("failed to read response: %s", err.Error())
        result.LookupTimeMs = time.Since(start).Milliseconds()
        return result
    }

    result.RawJSON = string(body)

    if resp.StatusCode != http.StatusOK {
        result.Error = fmt.Sprintf("RDAP server returned status %d", resp.StatusCode)
        result.LookupTimeMs = time.Since(start).Milliseconds()
        return result
    }

    if err := json.Unmarshal(body, &result); err != nil {
        result.Error = fmt.Sprintf("failed to parse RDAP response: %s", err.Error())
    }

    result.LookupTimeMs = time.Since(start).Milliseconds()
    return result
}
Unchecked resp.Body.Close error is failing CI.
The linter requires the error return value to be handled. This pattern repeats at lines 666, 721, and 772.
🔎 Proposed fix for RDAPLookupDomainWithTimeout
client := &http.Client{Timeout: timeout}
resp, err := client.Get(serverURL)
if err != nil {
result.Error = fmt.Sprintf("RDAP request failed: %s", err.Error())
result.LookupTimeMs = time.Since(start).Milliseconds()
return result
}
- defer resp.Body.Close()
+ defer func() { _ = resp.Body.Close() }()

Apply the same pattern to lines 721 and 772.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
client := &http.Client{Timeout: timeout}
resp, err := client.Get(serverURL)
if err != nil {
    result.Error = fmt.Sprintf("RDAP request failed: %s", err.Error())
    result.LookupTimeMs = time.Since(start).Milliseconds()
    return result
}
defer func() { _ = resp.Body.Close() }()

body, err := io.ReadAll(resp.Body)
if err != nil {
    result.Error = fmt.Sprintf("failed to read response: %s", err.Error())
    result.LookupTimeMs = time.Since(start).Milliseconds()
    return result
}

result.RawJSON = string(body)

if resp.StatusCode != http.StatusOK {
    result.Error = fmt.Sprintf("RDAP server returned status %d", resp.StatusCode)
    result.LookupTimeMs = time.Since(start).Milliseconds()
    return result
}

if err := json.Unmarshal(body, &result); err != nil {
    result.Error = fmt.Sprintf("failed to parse RDAP response: %s", err.Error())
}

result.LookupTimeMs = time.Since(start).Milliseconds()
return result
}
🧰 Tools
🪛 GitHub Actions: CI
[error] 666-666: golangci-lint error: dns_tools.go:666:23: Error return value of resp.Body.Close is not checked (errcheck).
🪛 GitHub Check: build-test-gonum
[failure] 666-666:
Error return value of resp.Body.Close is not checked (errcheck)
🪛 GitHub Check: build-test-wasm
[failure] 666-666:
Error return value of resp.Body.Close is not checked (errcheck)
🤖 Prompt for AI Agents
In dns_tools.go around lines 659 to 689, the resp.Body.Close() error is not
being checked which fails CI; replace the defer resp.Body.Close() with an
explicit close call where appropriate and check its error return (e.g., after
reading/parsing the body and at the other locations mentioned on lines ~666,
~721, ~772), and if Close returns an error set result.Error =
fmt.Sprintf("failed to close response body: %s", err.Error()), set
result.LookupTimeMs = time.Since(start).Milliseconds() and return result so the
error is handled consistently.