<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: GNICAP</title>
    <description>The latest articles on DEV Community by GNICAP (@gnicap).</description>
    <link>https://dev.to/gnicap</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3756547%2Ff92e12a0-cd23-453b-975a-2e452b1446fc.png</url>
      <title>DEV Community: GNICAP</title>
      <link>https://dev.to/gnicap</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gnicap"/>
    <language>en</language>
    <item>
      <title>Real-Time Multi-Dimensional Evaluation: How GNICAP Stress-Tests Investment Capability Over 3 Months</title>
      <dc:creator>GNICAP</dc:creator>
      <pubDate>Tue, 03 Mar 2026 08:29:06 +0000</pubDate>
      <link>https://dev.to/gnicap/real-time-multi-dimensional-evaluation-how-gnicap-stress-tests-investment-capability-over-3-months-38l0</link>
      <guid>https://dev.to/gnicap/real-time-multi-dimensional-evaluation-how-gnicap-stress-tests-investment-capability-over-3-months-38l0</guid>
<description>&lt;p&gt;Phase 4 of the Global National Investment Capability Assessment Program (GNICAP) begins this month — a three-month live evaluation window (March–May 2026) where 10 finalists are assessed in real time across performance, risk, governance, and trust.&lt;br&gt;
For those of us who build monitoring and evaluation systems, the technical challenges here are interesting. How do you run a fair, tamper-resistant, multi-dimensional assessment on live data, with public-facing outputs, over an extended period?&lt;br&gt;
Here's how GNICAP appears to have architected it — and the design patterns worth noting.&lt;br&gt;
The Evaluation Pipeline&lt;br&gt;
Phase 4 runs four assessment tracks concurrently, each feeding into the composite scoring engine:&lt;br&gt;
┌──────────────────────────────────────────────────────────┐&lt;br&gt;
│                PHASE 4: LIVE EVALUATION                  │&lt;br&gt;
│                  (March–May 2026)                         │&lt;br&gt;
├──────────────┬──────────────┬─────────────┬──────────────┤&lt;br&gt;
│  TRACK 1     │  TRACK 2     │  TRACK 3    │  TRACK 4     │&lt;br&gt;
│  Performance │  Risk Mgmt   │  Governance │  Public Trust│&lt;br&gt;
│  Verification│  Monitoring  │  Consistency│  Index       │&lt;br&gt;
├──────────────┼──────────────┼─────────────┼──────────────┤&lt;br&gt;
│  Returns     │  Drawdown    │  Strategy   │  Engagement  │&lt;br&gt;
│  Risk-adj    │  thresholds  │  logic      │  Voting      │&lt;br&gt;
│  Volatility  │  Breach      │  Replicab.  │  Dedup       │&lt;br&gt;
│  Drawdowns   │  detection   │  Stability  │  Rate-limit  │&lt;br&gt;
├──────────────┴──────────────┴─────────────┴──────────────┤&lt;br&gt;
│            COMPOSITE SCORING ENGINE                       │&lt;br&gt;
│         40% Governance + 30% Perf + 30% Trust            │&lt;br&gt;
├──────────────────────────────────────────────────────────┤&lt;br&gt;
│            DUAL-VICTORY VALIDATION                        │&lt;br&gt;
│       top_perf == top_trust → Champion                   │&lt;br&gt;
│       top_perf != top_trust → No Champion                │&lt;br&gt;
└──────────────────────────────────────────────────────────┘&lt;/p&gt;

&lt;p&gt;Track 1: Performance Indexing (Not Raw Leaderboards)&lt;br&gt;
The most important design decision: GNICAP doesn't display raw P&amp;amp;L.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Naive approach (what most competitions do)
leaderboard = sorted(participants, key=lambda p: p.total_return, reverse=True)

# GNICAP approach: indexed, risk-adjusted, banded
def compute_performance_index(participant, window):
    raw_return = participant.total_return(window)
    # drawdowns are negative; use the magnitude for risk adjustment
    risk_adjusted = raw_return / abs(participant.max_drawdown(window))
    vol_penalty = participant.volatility(window) * VOL_COEFFICIENT

    composite = (
        raw_return * 0.4 +
        risk_adjusted * 0.35 +
        (1 - vol_penalty) * 0.25
    )

    return band_score(composite, PERFORMANCE_BANDS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Why this matters: raw P&amp;amp;L leaderboards incentivize maximum risk-taking. If you know you're ranked by returns alone, the rational strategy is to maximize leverage and hope for the best. Indexing with risk-adjustment and banding removes that incentive.&lt;/p&gt;
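A minimal runnable sketch of the indexing-plus-banding idea. The band thresholds, the VOL_COEFFICIENT value, and the sample numbers are illustrative assumptions, not GNICAP's actual parameters:

```python
# Illustrative only: banded, risk-adjusted performance index.
# VOL_COEFFICIENT and PERFORMANCE_BANDS are assumed values.
VOL_COEFFICIENT = 0.5
PERFORMANCE_BANDS = [(0.8, "Band A"), (0.5, "Band B"), (0.2, "Band C")]

def band_score(raw_score, bands):
    # map a continuous score to the first band whose threshold it clears
    for threshold, label in bands:
        if raw_score >= threshold:
            return label
    return "Below Threshold"

def performance_index(raw_return, max_drawdown, volatility):
    # drawdowns are negative; use the magnitude for risk adjustment
    risk_adjusted = raw_return / abs(max_drawdown)
    vol_penalty = volatility * VOL_COEFFICIENT
    composite = raw_return * 0.4 + risk_adjusted * 0.35 + (1 - vol_penalty) * 0.25
    return band_score(composite, PERFORMANCE_BANDS)

# A reckless high-return run vs. a disciplined moderate one:
print(performance_index(0.50, -0.50, 1.4))  # → Band B
print(performance_index(0.25, -0.10, 0.2))  # → Band A
```

The disciplined run out-bands the higher raw return: once drawdown and volatility are priced in, maximizing leverage stops being the rational strategy.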

&lt;p&gt;Track 2: Continuous Risk Monitoring with Circuit Breakers&lt;br&gt;
GNICAP monitors risk thresholds in real time. A breach triggers elimination — even in the finals.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class RiskMonitor:
    """
    Continuous threshold monitoring for GNICAP Phase 4.
    Breach = elimination (code E1), regardless of stage.
    """

    def __init__(self, thresholds):
        self.max_drawdown = thresholds['max_drawdown']      # e.g., -15%
        self.max_daily_loss = thresholds['max_daily_loss']  # e.g., -5%
        self.vol_ceiling = thresholds['volatility_ceiling']

    def check(self, participant, timestamp):
        alerts = []

        current_dd = participant.current_drawdown()
        if current_dd &amp;lt; self.max_drawdown:
            alerts.append(EliminationEvent(
                participant=participant,
                code="E1",
                reason="Risk Limit Breach",
                metric=f"Drawdown: {current_dd:.1%}",
                timestamp=timestamp
            ))

        daily_pnl = participant.daily_return(timestamp)
        if daily_pnl &amp;lt; self.max_daily_loss:
            alerts.append(EliminationEvent(
                participant=participant,
                code="E1",
                reason="Risk Limit Breach",
                metric=f"Daily Loss: {daily_pnl:.1%}",
                timestamp=timestamp
            ))

        return alerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is essentially a financial circuit breaker pattern — the same concept used in exchange-level market halts, applied at the participant level.&lt;/p&gt;

&lt;p&gt;Track 3: Governance Consistency Scoring&lt;br&gt;
The most nuanced track. How do you programmatically assess whether someone's investment decisions follow a coherent logic?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def assess_governance_consistency(participant, window):
    """
    Evaluate whether investment decisions follow a
    stable, explainable, replicable framework.
    """
    decisions = participant.get_decisions(window)

    # Factor 1: Strategy drift detection
    style_vectors = [compute_style_vector(d) for d in decisions]
    drift_score = 1.0 - cosine_distance_variance(style_vectors)

    # Factor 2: Decision-thesis alignment
    # Does each trade match the stated investment logic?
    alignment_scores = []
    for decision in decisions:
        alignment = evaluate_thesis_match(
            decision.action,
            participant.stated_strategy
        )
        alignment_scores.append(alignment)
    alignment_score = mean(alignment_scores)

    # Factor 3: Process evidence
    # Documentation quality, reasoning transparency
    process_score = evaluate_process_documentation(participant)

    return weighted_average([
        (drift_score, 0.35),
        (alignment_score, 0.40),
        (process_score, 0.25)
    ])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is conceptually similar to ML model monitoring — detecting distribution drift, validating that outputs align with declared objectives, and measuring process quality.&lt;/p&gt;
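The cosine_distance_variance helper is left undefined in the sketch above. One plausible reading, shown here with assumed 2-D style vectors, is the variance of each decision's cosine distance from the mean style vector — low variance means a stable style, so drift_score stays near 1.0:

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def cosine_distance_variance(vectors):
    # variance of each vector's distance from the mean style vector
    mean = [sum(col) / len(vectors) for col in zip(*vectors)]
    dists = [cosine_distance(v, mean) for v in vectors]
    mu = sum(dists) / len(dists)
    return sum((d - mu) ** 2 for d in dists) / len(dists)

stable = [[1.0, 0.1], [0.9, 0.15], [1.1, 0.05]]   # consistent style
drifting = [[1.0, 0.1], [0.1, 1.0], [-0.5, 0.8]]  # style flips mid-window
print(cosine_distance_variance(stable) < cosine_distance_variance(drifting))  # → True
```

This is one guess at the intended metric, not GNICAP's actual implementation; any drift statistic over the style vectors would slot into the same place.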

&lt;p&gt;Track 4: Anti-Manipulation Trust Pipeline&lt;br&gt;
I covered this in my previous post, but the Phase 4 implementation adds temporal dynamics:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class TrustIndexPipeline:
    """
    Phase 4 processes trust data continuously over 3 months,
    adding time-weighted decay to prevent front-loading.
    """

    def process(self, raw_signals, window_start, window_end):
        # Step 1: De-duplicate
        unique = self.deduplicate(raw_signals)

        # Step 2: Rate limit (prevent burst voting)
        rate_limited = self.apply_rate_limit(
            unique,
            max_per_source_per_day=1
        )

        # Step 3: Time-weight (recent engagement &amp;gt; old)
        time_weighted = self.apply_temporal_decay(
            rate_limited,
            half_life_days=14  # 2-week half-life
        )

        # Step 4: Band into index tiers
        return self.compute_band_index(time_weighted)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The temporal decay is clever: it means early-stage viral spikes matter less than sustained, consistent engagement over the full three months. This rewards authentic community building over social media manipulation.&lt;/p&gt;
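A few lines show what a 14-day half-life does to vote weight. The vote counts below are made up, and apply_temporal_decay's real internals aren't public — this is just the standard exponential-decay formula:

```python
# Exponential decay with a 14-day half-life: weight halves every two weeks.
def decay_weight(age_days, half_life_days=14):
    return 0.5 ** (age_days / half_life_days)

# Made-up numbers: a 1000-vote viral burst 10 weeks before scoring
# vs. 20 votes per week sustained across the same 10 weeks.
burst = 1000 * decay_weight(70)
sustained = sum(20 * decay_weight(7 * week) for week in range(10))
print(burst)              # → 31.25
print(sustained > burst)  # → True
```

Two hundred sustained votes end up outweighing a thousand-vote spike, which is exactly the front-loading deterrent described above.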

&lt;p&gt;The Dual-Victory Constraint in Production&lt;br&gt;
The final output logic is the simplest part — but the hardest to satisfy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def evaluate_championship(finalists):
    """
    Both conditions must be TRUE simultaneously.
    If they diverge, no champion is declared.
    """
    sorted_by_composite = sorted(
        finalists,
        key=lambda f: f.composite_score,
        reverse=True
    )

    top_performer = sorted_by_composite[0]
    top_trust = max(finalists, key=lambda f: f.trust_index)

    if (top_performer.id == top_trust.id and
            top_trust.trust_tier == "HIGHEST"):
        return ChampionResult(
            champion=top_performer,
            status="CONFIRMED"
        )
    else:
        return ChampionResult(
            champion=None,
            status="NO_CHAMPION_QUALIFIED",
            reason=f"Performance leader: {top_performer.id}, "
                   f"Trust leader: {top_trust.id}"
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This conjunctive constraint is rare in competitive systems. Most ranking engines use a single sorted output. GNICAP's willingness to declare "no winner" adds genuine integrity to the system.&lt;/p&gt;

&lt;p&gt;What Makes This Interesting for Engineers&lt;br&gt;
Three patterns from the GNICAP Phase 4 architecture that apply broadly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Indexed outputs over raw metrics — prevents gaming and reduces sensitivity to outlier events&lt;/li&gt;
&lt;li&gt;Continuous circuit-breaker monitoring — real-time threshold enforcement, not just end-of-period evaluation&lt;/li&gt;
&lt;li&gt;Temporal decay in crowd signals — sustained engagement beats spikes; harder to manipulate&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Phase 4 runs March through May 2026. Results in June. Worth watching how the architecture holds up under three months of live data.&lt;br&gt;
🔗 &lt;a href="https://www.gnicap.com/" rel="noopener noreferrer"&gt;https://www.gnicap.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>data</category>
      <category>monitoring</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Building a Multi-Dimensional Scoring System for National Investment Capability — Lessons from GNICAP</title>
      <dc:creator>GNICAP</dc:creator>
      <pubDate>Sun, 01 Mar 2026 06:27:54 +0000</pubDate>
      <link>https://dev.to/gnicap/building-a-multi-dimensional-scoring-system-for-national-investment-capability-lessons-from-gnicap-1llm</link>
      <guid>https://dev.to/gnicap/building-a-multi-dimensional-scoring-system-for-national-investment-capability-lessons-from-gnicap-1llm</guid>
      <description>&lt;p&gt;Most of us in the dev/data community have built scoring systems at some point — user reputation scores, credit risk models, recommendation engines. But what happens when the thing you're scoring is an entire country's investment capability?&lt;br&gt;
That's essentially what the Global National Investment Capability Assessment Program (GNICAP) has attempted to do. And whether you're interested in global finance or not, the architecture of their evaluation framework has some interesting design patterns worth examining.&lt;/p&gt;

&lt;p&gt;The Problem Statement&lt;br&gt;
GNICAP needed to answer a deceptively complex question: How do you objectively rank the investment capability of representatives from different nations, in a way that institutional capital would actually trust?&lt;br&gt;
The naive approach — rank by returns — fails for the same reason single-metric user reputation systems fail: it's gameable, volatile, and doesn't capture the structural qualities that matter long-term.&lt;br&gt;
GNICAP's solution is a composite scoring architecture with deliberate weighting asymmetry.&lt;/p&gt;

&lt;p&gt;The Architecture: 4 Dimensions, 3 Pillars, 1 Dual-Condition Output&lt;br&gt;
Input Dimensions (Data Collection Layer)&lt;br&gt;
GNICAP collects signals across four dimensions:&lt;br&gt;
┌─────────────────────────────────────────────────────┐&lt;br&gt;
│              GNICAP EVALUATION DIMENSIONS            │&lt;br&gt;
├──────────────────────┬──────────────────────────────┤&lt;br&gt;
│ Financial System     │ Macro indicators, market     │&lt;br&gt;
│ Stability            │ maturity, risk buffers       │&lt;br&gt;
├──────────────────────┼──────────────────────────────┤&lt;br&gt;
│ Investment Governance│ Decision frameworks, mgmt    │&lt;br&gt;
│ Capability           │ depth, allocation logic      │&lt;br&gt;
├──────────────────────┼──────────────────────────────┤&lt;br&gt;
│ Transparency &amp;amp;       │ Disclosure quality,          │&lt;br&gt;
│ Accountability       │ compliance, regulation       │&lt;br&gt;
├──────────────────────┼──────────────────────────────┤&lt;br&gt;
│ Global Capital       │ Long-term friendliness,      │&lt;br&gt;
│ Attractiveness       │ institutional continuity     │&lt;br&gt;
└──────────────────────┴──────────────────────────────┘&lt;br&gt;
This maps roughly to the evaluation methodologies used by the OECD and World Bank for national-level financial assessments — adapted into a participant-level scoring model.&lt;/p&gt;

&lt;p&gt;Composite Scoring Layer&lt;br&gt;
The four dimensions feed into a 100-point composite distributed across three weighted pillars:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# GNICAP Composite Score (simplified representation)
composite_score = (
    capability_score * 0.40 +     # Investment &amp;amp; Governance
    performance_index * 0.30 +    # Stage performance (indexed)
    public_trust_index * 0.30     # Trust engagement (processed)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The asymmetric weighting (0.40 / 0.30 / 0.30) is the most deliberate design choice. By weighting governance higher than raw performance, the system penalizes over-optimization of a single output metric.&lt;/p&gt;

&lt;p&gt;Output Normalization&lt;br&gt;
Both performance and trust scores are indexed and banded rather than displayed as raw values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Banding approach (conceptual)
def band_score(raw_score, bands):
    """
    Convert continuous score to discrete band.
    Prevents single-data-point distortion.
    """
    for threshold, band_label in bands:
        if raw_score &amp;gt;= threshold:
            return band_label
    return "Below Threshold"

trust_bands = [
    (90, "Highest"),
    (75, "High"),
    (60, "Moderate"),
    (0,  "Below Threshold")
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is a smart anti-manipulation design. If raw scores were public, participants could game specific metrics. Banding forces holistic improvement.&lt;/p&gt;

&lt;p&gt;Anti-Gaming: Public Trust Processing Pipeline&lt;br&gt;
The Public Trust Index has its own preprocessing pipeline:&lt;br&gt;
Raw Votes → De-duplication → Frequency Limiting → Band Processing → Index&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Trust index processing (conceptual)
def process_trust_votes(raw_votes):
    # Step 1: De-duplicate (one vote per unique identifier)
    unique_votes = deduplicate(raw_votes)

    # Step 2: Frequency limit (cap voting rate per source)
    rate_limited = apply_rate_limit(unique_votes, max_per_period=1)

    # Step 3: Band processing (smooth into index bands)
    trust_index = compute_band_index(rate_limited)

    return trust_index
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
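The deduplicate and apply_rate_limit helpers are left abstract above. Assuming each vote carries a source identifier and a day bucket — a guess at the schema, not GNICAP's actual one — they might look like:

```python
from collections import defaultdict

# Hypothetical vote shape: (source_id, day, candidate).
def deduplicate(votes):
    # drop exact repeat submissions, preserving first-seen order
    return list(dict.fromkeys(votes))

def apply_rate_limit(votes, max_per_period=1):
    # cap each source at max_per_period votes per day bucket
    seen = defaultdict(int)
    kept = []
    for source, day, candidate in votes:
        if seen[(source, day)] < max_per_period:
            seen[(source, day)] += 1
            kept.append((source, day, candidate))
    return kept

votes = [("u1", 1, "A"), ("u1", 1, "A"), ("u1", 1, "B"), ("u2", 1, "A")]
print(apply_rate_limit(deduplicate(votes)))  # → [('u1', 1, 'A'), ('u2', 1, 'A')]
```

The duplicate submission and the same-day second vote from u1 are both dropped before any index math runs, which is what blunts coordinated ballot stuffing.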

&lt;p&gt;This pipeline addresses the most common attack vector in public voting systems: coordinated ballot stuffing. The multi-stage processing makes it significantly harder to manipulate the trust dimension.&lt;/p&gt;

&lt;p&gt;Dual-Condition Output (Championship Logic)&lt;br&gt;
The final output applies a conjunctive rule — both conditions must be TRUE:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def determine_champion(finalists):
    """
    GNICAP Dual-Victory Rule:
    Champion must satisfy BOTH conditions simultaneously.
    """
    # Condition 1: Highest composite performance
    top_performer = max(finalists, key=lambda f: f.composite_score)

    # Condition 2: Highest Public Trust tier
    top_trust = max(finalists, key=lambda f: f.trust_index)

    if top_performer == top_trust:
        return top_performer  # Champion confirmed
    else:
        return None  # No champion this cycle (edge case)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is essentially a multi-objective optimization constraint — the system refuses to produce a winner that doesn't satisfy both objectives. In ML terms, it's like requiring a model to exceed thresholds on both precision AND recall before deployment.&lt;/p&gt;
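That deployment analogy can be stated in a few lines; the 0.9/0.8 thresholds below are arbitrary stand-ins, not anyone's real release criteria:

```python
# Conjunctive deployment gate: both metrics must clear their thresholds,
# just as the dual-victory rule requires both performance and trust leadership.
def may_deploy(precision, recall, p_min=0.9, r_min=0.8):
    return precision >= p_min and recall >= r_min

print(may_deploy(0.95, 0.85))  # → True
print(may_deploy(0.99, 0.60))  # → False: great precision alone doesn't ship
```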

&lt;p&gt;Design Patterns Worth Noting&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Asymmetric weighting as an anti-gaming mechanism. By not weighting all pillars equally, GNICAP forces participants to invest in governance — the hardest-to-fake dimension. This is analogous to how some reputation systems weight account age or verification status disproportionately.&lt;/li&gt;
&lt;li&gt;Banding over raw scores for public display. This is a lesson every public-facing scoring system should learn. Raw scores invite gaming and misinterpretation. Bands communicate relative standing without exposing exploitable precision.&lt;/li&gt;
&lt;li&gt;Conjunctive (AND) championship logic over disjunctive (OR). Most ranking systems use a single sorted leaderboard. GNICAP's dual-condition approach acknowledges that top-ranked-on-one-metric is not the same as genuinely-best-overall.&lt;/li&gt;
&lt;li&gt;Transparent elimination with reason codes. Every participant eliminated from GNICAP gets a public reason code (E1: risk breach, E2: disclosure gap, E3: governance concern, E4: performance threshold). This is essentially a public audit log — a pattern more systems should adopt.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Current State&lt;br&gt;
GNICAP has 10 finalists from 10 countries entering Phase 4 (Terminal Evaluation, March–May 2026). The architecture above will be stress-tested with live market data over the final assessment window.&lt;br&gt;
Whether you're building fintech scoring models, trust and safety systems, or competitive ranking platforms, the design choices here are worth studying.&lt;br&gt;
🔗 &lt;a href="https://www.gnicap.com/" rel="noopener noreferrer"&gt;https://www.gnicap.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>architecture</category>
      <category>datascience</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>🏛 GNICAP OFFICIAL ANNOUNCEMENT</title>
      <dc:creator>GNICAP</dc:creator>
      <pubDate>Tue, 10 Feb 2026 13:30:08 +0000</pubDate>
      <link>https://dev.to/gnicap/gnicap-official-announcement-24e6</link>
      <guid>https://dev.to/gnicap/gnicap-official-announcement-24e6</guid>
      <description>&lt;p&gt;After months of rigorous, multi-phase evaluation, the Global National Investment Capability Assessment Program (GNICAP) has officially confirmed 10 finalists who will enter the fourth and final stage of assessment — the Terminal Evaluation and Championship Phase — set to take place from March through May 2026.&lt;br&gt;
Since its launch in August 2025, GNICAP has drawn over 100 participants from multiple countries and regions. Through four systematic phases — Eligibility Review, Capability Assessment, Elimination Rounds, and now Terminal Evaluation — the program has identified 10 representatives who demonstrated exceptional long-term investment logic, disciplined risk management, and consistent decision-making quality.&lt;br&gt;
The 10 Finalists:&lt;br&gt;
🇮🇩 Daniel Hartono — Indonesia&lt;br&gt;
Long-term capital governance specialist. Known for disciplined risk control, quantitative strategies, and cross-cycle asset allocation.&lt;br&gt;
🇻🇳 Nguyễn Minh Quân — Vietnam&lt;br&gt;
Industry-oriented investor focused on manufacturing upgrades and policy-cycle alignment across Southeast Asia.&lt;br&gt;
🇵🇱 Michał Kowalczyk — Poland&lt;br&gt;
Value and event-driven investor with deep expertise in Central and Eastern European capital markets and corporate restructuring.&lt;br&gt;
🇧🇷 Rafael Monteiro — Brazil&lt;br&gt;
Macro and commodity-cycle specialist maintaining portfolio discipline through high-inflation environments.&lt;br&gt;
🇲🇽 Alejandro Cruz — Mexico&lt;br&gt;
Structural trend investor focused on North America–Latin America supply chain realignment and industrial manufacturing.&lt;br&gt;
🇹🇷 Emre Yıldırım — Turkey&lt;br&gt;
Defensive macro investor with expertise in navigating high-volatility and currency-risk environments.&lt;br&gt;
🇳🇬 Adekunle Adebayo — Nigeria&lt;br&gt;
Long-term structural investor focused on demographic trends, infrastructure development, and financial inclusion in Africa.&lt;br&gt;
🇨🇱 Sebastián Rojas — Chile&lt;br&gt;
Institutional conservative investor specializing in pension systems and public asset management with benchmark-level stability.&lt;br&gt;
🇮🇳 Arjun Mehta — India&lt;br&gt;
Growth-oriented, technology-driven investor capturing structural opportunities in digital economy and services sectors.&lt;br&gt;
🇵🇭 Luis Fernando Reyes — Philippines&lt;br&gt;
Defensive and consumption-driven investor emphasizing cash-flow stability and domestic demand resilience.&lt;br&gt;
What Comes Next:&lt;br&gt;
The Terminal Evaluation phase (March–May 2026) will assess:&lt;br&gt;
— Final investment performance verification&lt;br&gt;
— Risk and drawdown management review&lt;br&gt;
— Investment logic consistency evaluation&lt;br&gt;
— Public Trust Index (supporter voting aggregation)&lt;br&gt;
The champion must satisfy a dual-victory condition: rank first in composite performance AND receive the highest public trust vote — reflecting GNICAP's belief that true investment capability requires both results and trust.&lt;br&gt;
🔗 Official Website: &lt;a href="https://www.gnicap.com/en/index.html" rel="noopener noreferrer"&gt;https://www.gnicap.com/en/index.html&lt;/a&gt;&lt;br&gt;
—&lt;br&gt;
Global National Investment Capability Assessment Program (GNICAP)&lt;br&gt;
A Global Framework for Evaluating National Readiness for Long-Term Capital&lt;br&gt;
© GNICAP Research &amp;amp; Assessment Framework&lt;/p&gt;

</description>
      <category>gnicap</category>
      <category>globalfinance</category>
      <category>longtermcapital</category>
      <category>finalists2026</category>
    </item>
  </channel>
</rss>
