DEV Community

NexGenData

The Complete Guide to Scraping Real Estate Data from Redfin in 2026

Real estate data is one of the most valuable commodities in the modern economy. Whether you're an investor analyzing deal flow, a developer building a property analytics platform, or a data analyst tracking market trends, accurate and timely real estate data is essential to your success. Yet despite Redfin's position as one of America's largest real estate platforms with millions of listings, there's no public API for accessing their data at scale.

This gap leaves real estate professionals with a difficult choice: either pay premium prices for traditional data brokers, or manually collect information property by property. There's a third option, though—and it's far more efficient and cost-effective.

In this guide, I'll walk you through everything you need to know about scraping Redfin data in 2026, including practical code examples, extraction strategies, and how to build intelligent real estate workflows with the Real Estate MCP server. By the end, you'll understand exactly how to automate your real estate data collection pipeline.

Why Real Estate Data Matters More Than Ever

Before diving into the technical aspects of how to scrape Redfin data, let's establish why this matters.

Real estate represents the largest asset class in the world, with properties transacting daily based on market conditions that change by the hour. For professional real estate investors, every percentage point of market intelligence translates directly to portfolio returns. A comparative market analysis (CMA) that takes three hours to build manually can be generated in three minutes with automated data collection. A rent-versus-buy calculation that requires researching 30 properties individually becomes a data-driven decision across hundreds of markets when you scrape Redfin data systematically.

Key statistics:

  • Real estate investors analyze an average of 40-60 properties before making an offer
  • Market data more than 30 days old is considered stale in competitive markets
  • Accurate pricing data can improve investment returns by 3-5%
  • Automating data collection reduces research time by 80%+

The challenge is that Redfin—despite being a major listing source—doesn't offer direct API access to their property data. This forces professionals to either pay thousands monthly for aggregated data services, or build their own collection infrastructure. For many, automated scraping of Redfin data has become essential.

The Challenge: Why Redfin Doesn't Have a Public API

Redfin's lack of a public API isn't accidental. Real estate agents, Multiple Listing Services (MLS), and brokers have complex contractual relationships around data usage. Redfin aggregates data from thousands of MLS services and displays it to consumers, but the rights to redistribute that data programmatically are restricted. This creates a genuine technical and legal complexity that affects any effort to scrape Redfin data.

However, the data that appears on Redfin.com is publicly visible to anyone who loads the site in a browser. This distinction matters: scraping publicly displayed pages is a widely established practice and is fundamentally different from unauthorized access to a private API, though its permissibility still depends on the site's terms of service, how you use the data, and your jurisdiction.

The practical solution is browser-based scraping—using automation tools to extract data that's already publicly displayed. This is the approach we'll cover in this guide.

What Data Can You Extract When You Scrape Redfin?

Understanding what's available is crucial before building your scraping pipeline. Here's the comprehensive data you can extract:

Property Core Data:

  • Property address, ZIP code, and coordinates
  • Listing price (current and historical)
  • Redfin Estimate (Redfin's automated valuation; the "Zestimate" is Zillow's equivalent)
  • Days on market
  • Property type (single family, condo, townhouse, etc.)
  • Year built and renovation history
  • Square footage and lot size
  • Beds, baths, and half-baths
  • HOA fees and property taxes
  • MLS number

Pricing & Market Data:

  • Sale price history (entire transaction history)
  • Price per square foot trends
  • Rent estimates
  • Tax assessment history
  • Property value history (Redfin Estimate trends)
  • Pending sales and list price changes

Property Details:

  • Lot dimensions
  • Garage spaces and type
  • Basement information
  • Heating and cooling systems
  • Roof type and age
  • Construction materials
  • Special features (pool, fireplace, etc.)

Agent & Listing Information:

  • Listing agent name and contact
  • Brokerage information
  • Agent ratings and review count
  • Time on market for this agent
  • Open house schedules

Neighborhood & Market Context:

  • School ratings and nearby schools
  • Walk score and transit options
  • Crime statistics
  • Neighborhood median prices
  • Demographic information
  • Local amenities

This comprehensive dataset enables sophisticated analysis that would take days to collect manually.

Understanding the Technical Approach

When you scrape Redfin data, you're essentially automating what a human would do visiting the site in a web browser:

  1. Navigate to search results for a specific market/criteria
  2. Scroll through or paginate through listings
  3. Click into individual properties
  4. Extract visible information
  5. Compile into a structured dataset

This is fundamentally different from breaking into a system—you're accessing publicly available information through the normal web interface, just at scale and automatically.

The most reliable approach uses browser automation (Playwright, Puppeteer) rather than raw HTTP requests, because Redfin's interface uses JavaScript heavily. This means you need to actually render the page, wait for dynamic content to load, and then extract data—exactly as a human browser would.

Step-by-Step: Setting Up Your Redfin Data Scraping Pipeline

Step 1: Choose Your Tooling

For JavaScript/Node.js environments, the Apify SDK provides the most robust foundation for scraping Redfin data. The SDK handles:

  • Browser automation with Playwright
  • Proxy rotation to avoid blocks
  • Intelligent request queuing
  • Data storage and export
  • Scheduler integration
  • Error handling and retries

Step 2: Define Your Scraping Scope

Before writing code, define exactly what you need:

  • Geographic scope (ZIP codes, neighborhoods, cities)
  • Property type filters (single family, condos, etc.)
  • Price ranges
  • Update frequency (daily, weekly, monthly)
  • Specific data fields needed
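One way to make that scope concrete in code is a small config object plus a filter predicate. This is just a sketch — the field names (`zipCodes`, `propertyTypes`, and so on) are illustrative, not part of any Redfin schema:

```javascript
// Illustrative scope config; field names are examples, not a Redfin schema
const scope = {
    zipCodes: ['94107', '94110'],
    propertyTypes: ['Single Family', 'Condo'],
    minPrice: 500000,
    maxPrice: 1500000,
};

// Returns true when a scraped listing falls inside the configured scope
function inScope(listing, cfg) {
    if (cfg.zipCodes.length &&
        !cfg.zipCodes.some(zip => (listing.address || '').includes(zip))) return false;
    if (cfg.propertyTypes.length &&
        !cfg.propertyTypes.includes(listing.type)) return false;
    return listing.price >= cfg.minPrice && listing.price <= cfg.maxPrice;
}
```

Running every scraped listing through a predicate like this keeps your dataset limited to exactly what you defined, which pays off later when you schedule recurring scrapes.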

Step 3: Structure Your Data Schema

Design your output JSON structure before scraping. Here's a practical example:

{
  "property_id": "2184523945",
  "address": "1245 Oak Street, San Francisco, CA 94107",
  "latitude": 37.7749,
  "longitude": -122.4194,
  "price": {
    "current_listing": 1250000,
    "estimated_value": 1265000,
    "price_per_sqft": 850,
    "last_sale_price": 1100000,
    "last_sale_date": "2024-03-15"
  },
  "property_details": {
    "type": "Single Family",
    "beds": 4,
    "baths": 2.5,
    "sqft": 1470,
    "lot_sqft": 2850,
    "year_built": 1962,
    "garage_spaces": 2,
    "pool": false
  },
  "market_data": {
    "days_on_market": 18,
    "neighborhood_median": 1320000,
    "price_trend": "stable",
    "rent_estimate": 5200
  },
  "taxes_and_fees": {
    "annual_property_tax": 3450,
    "hoa_fee_monthly": 0,
    "tax_assessed_value": 1200000
  },
  "listing_agent": {
    "name": "Sarah Chen",
    "company": "Redfin",
    "rating": 4.9,
    "reviews": 127
  },
  "history": [
    {
      "date": "2024-03-15",
      "event": "Sold",
      "price": 1100000
    },
    {
      "date": "2024-01-20",
      "event": "Listed",
      "price": 1095000
    }
  ],
  "scraped_at": "2026-04-01T14:32:00Z",
  "source": "redfin.com"
}

Code Tutorial: Scraping Redfin with Apify SDK

Here's a practical implementation to scrape Redfin data using the Apify SDK:

Basic Scraper Structure

const Apify = require('apify');

Apify.main(async () => {
    // Get input configuration
    const input = await Apify.getInput();
    const {
        searchUrl = 'https://www.redfin.com/city/10519/CA/San-Francisco',
        maxListings = 100,
        proxyGroup = 'RESIDENTIAL'
    } = input;

    // Initialize request list with the starting search URL
    const requestList = await Apify.openRequestList('REDFIN-LIST', [
        {
            url: searchUrl,
            userData: { label: 'LIST' }
        }
    ]);

    // Request queue for detail and pagination URLs discovered while crawling
    // (a RequestList is static, so newly found URLs must go into a RequestQueue)
    const requestQueue = await Apify.openRequestQueue();

    // Create dataset for results
    const dataset = await Apify.openDataset('redfin-properties');

    // Configure crawler
    const crawler = new Apify.PuppeteerCrawler({
        requestList,
        requestQueue,
        navigationTimeoutSecs: 60,
        useSessionPool: true,
        sessionPoolOptions: {
            maxPoolSize: 10,
        },
        // Proxy configuration
        proxyConfiguration: await Apify.createProxyConfiguration({
            groups: [proxyGroup],
        }),
        launchContext: {
            launchOptions: {
                headless: true,
            },
        },
        handlePageFunction: async ({ page, request, session }) => {
            console.log(`Processing: ${request.url}`);

            if (request.userData.label === 'LIST') {
                // Handle search results page
                await handleListPage(page, request, dataset, crawler);
            } else if (request.userData.label === 'DETAIL') {
                // Handle individual property details
                await handleDetailPage(page, request, dataset);
            }
        },
        // Called once all automatic retries are exhausted
        handleFailedRequestFunction: async ({ request, error }) => {
            console.log(`Error processing ${request.url}: ${error.message}`);
            // Log persistent failure for monitoring
            await Apify.pushData({
                '#debug': request.url,
                error: error.message,
                timestamp: new Date()
            });
        },
    });

    // Run crawler
    await crawler.run();
    console.log('Scraping completed!');
});

Handling Search Results Pages

async function handleListPage(page, request, dataset, crawler) {
    // Wait for listings to load
    // NOTE: the data-testid selectors throughout this guide are illustrative;
    // verify them against the live markup, which changes periodically
    await page.waitForSelector('[data-testid="property-card"]', { timeout: 10000 });

    // Extract all property links from current page
    const propertyUrls = await page.evaluate(() => {
        const cards = document.querySelectorAll('[data-testid="property-card"]');
        const urls = [];

        cards.forEach(card => {
            const linkElement = card.querySelector('a[href*="/homes/"]');
            if (linkElement) {
                const href = linkElement.getAttribute('href');
                if (href && !href.includes('#')) {
                    urls.push(href);
                }
            }
        });

        return urls;
    });

    console.log(`Found ${propertyUrls.length} properties on this page`);

    // Queue detail pages into the crawler's request queue
    // (new URLs cannot be added to a static RequestList)
    for (const url of propertyUrls) {
        const fullUrl = new URL(url, page.url()).href;
        await crawler.requestQueue.addRequest({
            url: fullUrl,
            userData: { label: 'DETAIL' }
        });
    }

    // Check for next page
    const nextPageButton = await page.$('a[aria-label="Next page"]');
    if (nextPageButton) {
        const nextUrl = await page.evaluate(
            (el) => el.getAttribute('href'),
            nextPageButton
        );
        if (nextUrl) {
            await crawler.requestQueue.addRequest({
                url: new URL(nextUrl, page.url()).href,
                userData: { label: 'LIST' }
            });
        }
    }
}

Extracting Property Details

async function handleDetailPage(page, request, dataset) {
    // Wait for main content to load
    await page.waitForSelector('[data-testid="property-details"]', { timeout: 10000 });

    // Extract property data
    const propertyData = await page.evaluate(() => {
        const getText = (selector) => {
            const el = document.querySelector(selector);
            return el ? el.textContent.trim() : null;
        };

        const getAllText = (selector) => {
            const els = document.querySelectorAll(selector);
            return Array.from(els).map(el => el.textContent.trim());
        };

        // Extract basic price information
        const priceText = getText('[data-testid="price"]');
        const priceMatch = priceText?.match(/\$[\d,]+/);
        const price = priceMatch ? parseInt(priceMatch[0].replace(/\D/g, '')) : null;

        // Extract property characteristics
        // (strip commas first so "1,470" parses as 1470, not 1)
        const clean = (s) => s ? s.replace(/,/g, '') : s;
        const beds = clean(getText('[data-testid="bed-count"]'))?.match(/\d+/)?.[0];
        const baths = clean(getText('[data-testid="bath-count"]'))?.match(/[\d.]+/)?.[0];
        const sqft = clean(getText('[data-testid="sqft"]'))?.match(/\d+/)?.[0];
        const lotSize = clean(getText('[data-testid="lot-size"]'))?.match(/\d+/)?.[0];

        // Extract address
        const address = getText('[data-testid="property-address"]');

        // Extract price per sqft
        const pricePerSqft = clean(getText('[data-testid="price-per-sqft"]'))?.match(/\d+/)?.[0];

        // Extract days on market
        const domText = getText('[data-testid="dom"]');
        const domMatch = clean(domText)?.match(/\d+/)?.[0];

        // Extract history
        const history = [];
        const historyRows = document.querySelectorAll('[data-testid="history-row"]');
        historyRows.forEach(row => {
            const dateText = row.querySelector('[data-testid="history-date"]')?.textContent.trim();
            const eventText = row.querySelector('[data-testid="history-event"]')?.textContent.trim();
            const priceText = row.querySelector('[data-testid="history-price"]')?.textContent.trim();

            if (dateText && eventText) {
                history.push({
                    date: dateText,
                    event: eventText,
                    price: priceText ? parseInt(priceText.replace(/\D/g, '')) : null
                });
            }
        });

        // Extract agent information
        const agentName = getText('[data-testid="agent-name"]');
        const agentCompany = getText('[data-testid="agent-company"]');
        const agentRating = getText('[data-testid="agent-rating"]')?.match(/[\d.]+/)?.[0];

        // Extract tax and HOA information (commas stripped before matching)
        const annualTax = getText('[data-testid="annual-tax"]')?.replace(/,/g, '').match(/\d+/)?.[0];
        const hoaFee = getText('[data-testid="hoa-fee"]')?.replace(/,/g, '').match(/\d+/)?.[0];

        return {
            address,
            price,
            pricePerSqft: pricePerSqft ? parseInt(pricePerSqft) : null,
            beds: beds ? parseInt(beds) : null,
            baths: baths ? parseFloat(baths) : null,
            sqft: sqft ? parseInt(sqft) : null,
            lotSize: lotSize ? parseInt(lotSize) : null,
            daysOnMarket: domMatch ? parseInt(domMatch) : null,
            agentName,
            agentCompany,
            agentRating: agentRating ? parseFloat(agentRating) : null,
            annualPropertyTax: annualTax ? parseInt(annualTax) : null,
            hoaFeeMonthly: hoaFee ? parseInt(hoaFee) : null,
            history
        };
    });

    // Add metadata and save
    const result = {
        ...propertyData,
        source_url: request.url,
        scraped_at: new Date().toISOString(),
        source: 'redfin.com'
    };

    await dataset.pushData(result);
    console.log(`Saved property: ${propertyData.address}`);
}

Making Scraping Efficient: Performance Optimization

When you scrape Redfin data at scale, efficiency matters. Here are practical optimization strategies:

Use Pagination with Concurrency:

const crawler = new Apify.PuppeteerCrawler({
    maxRequestsPerCrawl: 5000,
    maxConcurrency: 10, // number of pages processed in parallel
    sessionPoolOptions: {
        maxPoolSize: 10, // rotate across multiple sessions
    },
    // For explicit rate limiting, add a short randomized delay
    // inside handlePageFunction between requests
});

Implement Intelligent Caching:
Store URLs you've already scraped to avoid duplicates:

// Load URLs already stored in the dataset (getData() must be awaited)
const { items } = await (await Apify.openDataset('redfin-properties')).getData();
const seenUrls = new Set(items.map(item => item.source_url));

const isNewUrl = (url) => !seenUrls.has(url);

Use Proxy Rotation:
Rotating residential proxies reduces the risk of blocking:

proxyConfiguration: await Apify.createProxyConfiguration({
    groups: ['RESIDENTIAL'],
})

Implement Retry Logic:

const crawler = new Apify.PuppeteerCrawler({
    maxRequestRetries: 5,
    handlePageTimeoutSecs: 60,
});

The Real Estate MCP Server: AI-Powered Property Analysis

Beyond basic scraping, the Real Estate MCP server integrates with Claude and other AI systems to enable intelligent property analysis workflows. MCP (Model Context Protocol) servers extend AI capabilities by providing specialized tools.

The nexgendata Real Estate MCP server provides:

Property Analysis Tools:

  • Market comparable analysis
  • Investment potential scoring
  • Rent vs. buy calculations
  • Cap rate and ROI analysis
  • Cash flow projections
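These analyses rest on a handful of standard formulas. As a rough sketch of the underlying math (the textbook definitions, not the MCP server's actual implementation):

```javascript
// Cap rate: net operating income (NOI) as a fraction of purchase price
function capRate(annualRent, annualExpenses, price) {
    return (annualRent - annualExpenses) / price;
}

// Cash-on-cash return: annual pre-tax cash flow over cash actually invested
function cashOnCash(annualCashFlow, cashInvested) {
    return annualCashFlow / cashInvested;
}

// e.g. $5,200/mo rent, $18,000/yr expenses on a $1,250,000 purchase
const rate = capRate(5200 * 12, 18000, 1250000); // ≈ 0.0355 (3.55%)
```

Scraped rent estimates, tax figures, and HOA fees plug straight into the inputs, which is why having them in a structured schema matters.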

Workflow Integration:
Using the MCP server, you can create sophisticated real estate analysis agents:

User: "Analyze this property as an investment at $1,250,000 in San Francisco"

Agent uses MCP to:
1. Pull comparable sales data from scraped Redfin inventory
2. Calculate local rental rates and cap rates
3. Analyze neighborhood trends
4. Estimate ROI scenarios
5. Compare to market averages
6. Generate investment recommendation

The MCP server transforms raw scraped data into intelligent analysis. Instead of asking "What does Redfin data show?" you can ask "Should I buy this property?" and get an AI-powered analysis grounded in real market data.

Practical Use Cases for Scraping Redfin Data

1. Investment Property Analysis

Real estate investors use scraped Redfin data to identify deal flow:

Market: Austin, TX
Criteria: Single family, $400k-$600k, 3+ beds
Volume: 200+ properties analyzed weekly
Output: Investment scoring based on:
  - Cap rate vs. market average
  - Cash-on-cash return potential
  - Neighborhood appreciation trends
  - Rental demand signals

Outcome: Investors reduce deal evaluation time by 70% and identify deals that manual analysis would miss.

2. Rent vs. Buy Analysis

Consumers deciding between renting and buying need comparable data:

Analysis: Should I buy in Denver?
Requires:
  - Current listing prices for target neighborhood
  - Rental rates for comparable properties
  - 5-year price history trends
  - Carrying costs (taxes, insurance, HOA)
  - Appreciation forecasts

Scraped Redfin data provides: prices, taxes, history, neighborhood stats
Result: Data-driven rent vs. buy decision

3. Market Trend Tracking

Market analysts track price movements, days on market, and inventory levels:

Weekly tracking across 5 major metros:
  - Average sale prices and trends
  - Days on market variations
  - Inventory levels
  - Price per sqft by neighborhood
  - Listing to sale price ratios

Historical analysis reveals:
  - Seasonal patterns
  - Market acceleration/slowdown
  - Inventory pressures
  - Buyer/seller dynamics

4. Comparative Market Analysis (CMA)

Real estate agents generate CMAs required for pricing advice and marketing:

Property: 123 Main St, Portland OR
Generate CMA using:
  - 15 recent sales within 1/4 mile
  - Similar size, condition, features
  - Price adjustments for differences
  - Days on market analysis
  - Market conditions analysis

Automated from scraped data: 10 minutes vs. 2 hours manual
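The core of a quick CMA from scraped comps is median price per square foot applied to the subject's size. A simplified sketch (a real CMA also adjusts for condition, features, and recency):

```javascript
// Median of a numeric array
function median(values) {
    const s = [...values].sort((a, b) => a - b);
    const mid = Math.floor(s.length / 2);
    return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Estimate a subject property's value from comps' price per square foot
function cmaEstimate(comps, subjectSqft) {
    const ppsf = comps.map(c => c.price / c.sqft);
    return Math.round(median(ppsf) * subjectSqft);
}
```

Using the median rather than the mean keeps one outlier sale from skewing the estimate.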

5. Real Estate Technology Platforms

Companies building real estate tools need property data at scale:

Use cases:
  - Mortgage/lending platforms need property valuation data
  - Property management tools need rental comparables
  - Insurance platforms need property characteristics
  - Real estate analytics platforms need market trends
  - Appraisal tools need comparable sales data

Scraping Redfin eliminates vendor lock-in and provides current data.

Understanding Costs and Efficiency

When considering how to scrape Redfin data, cost is a critical factor.

Traditional Data Sources:

  • MLS data feeds: $500-$5,000+ monthly per market
  • Real estate data APIs: $2,000-$10,000+ monthly
  • Bulk data licenses: $10,000-$50,000+ annually
  • Manual research: Immeasurable time cost

Automated Scraping with Apify:

  • Cost per listing: $0.001-$0.003 (a fraction of a cent per property)
  • 100,000 listings: ~$100-$300
  • 1 million listings: ~$1,000-$3,000
  • One-time or ongoing collection as needed
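The per-listing arithmetic is simple enough to sanity-check yourself. The rates below are this article's estimates, not published pricing:

```javascript
// Cost range for a listing volume at the per-listing rates quoted above
function estimateCostRange(listings, low = 0.001, high = 0.003) {
    return { low: listings * low, high: listings * high };
}
```

For example, `estimateCostRange(100000)` works out to roughly $100-$300, matching the figures above.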

ROI Example:
Investor analyzing 500 properties monthly:

  • Manual research: 100+ hours ($5,000-$10,000 cost)
  • Scraped data pipeline: 2 hours setup + execution costs
  • Monthly investment data cost: ~$50
  • Annual savings: $55,000+

For professional real estate operations, the cost to scrape Redfin data is negligible compared to the value of current market intelligence.

Building a Reliable, Scalable Pipeline

Production-grade scraping requires more than code:

Monitoring:

// Track success rates
await Apify.setValue('scrape-stats', {
    listings_processed: 1250,
    successfully_scraped: 1243,
    failed: 7,
    success_rate: 0.994,
    average_response_time: 2340, // ms
    timestamp: new Date()
});

Scheduling:
Set up recurring scrapes to maintain current data:

  • Daily updates for active listings
  • Weekly updates for price history
  • Monthly full market updates
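A small helper can decide, per record, whether a re-scrape is due under such a schedule. A sketch, using the `scraped_at` timestamp from the schema above; the age thresholds are examples:

```javascript
// True when a record's last scrape is older than maxAgeDays
function needsRefresh(scrapedAt, maxAgeDays, now = new Date()) {
    const ageMs = now.getTime() - new Date(scrapedAt).getTime();
    return ageMs > maxAgeDays * 24 * 60 * 60 * 1000;
}
```

Filtering your dataset through this before each run means scheduled scrapes only touch stale records instead of re-fetching everything.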

Error Handling:

// Called only after the crawler's automatic retries (maxRequestRetries)
// are exhausted, so this is where permanent failures get logged
handleFailedRequestFunction: async ({ request, error }) => {
    console.error(`Failed after ${request.retryCount} retries: ${request.url}`);
    await Apify.pushData({
        '#failed': request.url,
        error: error.message,
        timestamp: new Date().toISOString(),
    });
}

Data Validation:
Ensure scraped data quality:

function validateProperty(property) {
    const required = ['address', 'price', 'beds', 'baths', 'sqft'];
    const missing = required.filter(field => !property[field]);

    if (missing.length > 0) {
        console.warn(`Missing fields for ${property.address}: ${missing.join(', ')}`);
        return false;
    }

    // Sanity checks
    if (property.price < 10000 || property.price > 100000000) {
        console.warn(`Suspicious price: ${property.address} at $${property.price}`);
        return false;
    }

    return true;
}

Integrating with the Real Estate MCP Server

Once you have scraped Redfin data, the Real Estate MCP server amplifies its value through AI integration.

Architecture:

Redfin Scraper → Dataset → MCP Server → AI Agent Interface
     ↓                           ↓
  250k properties          Real-time analysis
  Price data               Investment scoring
  Market trends           Comparative analysis
  Property details        Forecasting

Example: AI-Powered Property Recommendation

// Agent receives scraped market data
const marketContext = {
    targetMarket: "San Francisco, CA",
    listings: 1247, // from scraped data
    medianPrice: 1320000,
    priceChange90days: "+2.3%",
    avgDaysOnMarket: 21,
    totalInventory: 2341
};

// MCP server performs analysis
const analysis = await mcpServer.analyzeMarket(marketContext);

// AI generates recommendation
// "Market is appreciating 2.3% quarterly with moderate inventory.
//  Good time for long-term investment, challenging for flipping."

Addressing Practical Considerations

Legal Aspects:
Scraping publicly visible web pages is widely practiced in the real estate industry, but it is not automatically risk-free: terms of service, data-use restrictions, and local law all apply. At a minimum, review Redfin's terms and consider:

  • Robots.txt and rate limiting
  • Respectful request rates
  • No extraction of password-protected content
  • Compliance with jurisdiction data laws

Technical Reliability:
Redfin's website changes periodically. Maintain your scraper:

  • Monitor selector failures
  • Update selectors quarterly
  • Test against actual site regularly
  • Maintain version control for scraper code

Data Freshness:
Property data updates frequently:

  • Listing price changes within hours
  • Days on market changes daily
  • New listings appear constantly
  • Sold properties are updated regularly

Schedule scrapes appropriately for your use case (daily for active decision-making, weekly for trend analysis).

Common Challenges and Solutions

Challenge: Redfin blocks too many requests

  • Solution: Use residential proxies, reduce request rate, add random delays

Challenge: Dynamic content doesn't load

  • Solution: Wait for specific selectors, increase timeout values

Challenge: Data formats vary across property types

  • Solution: Build flexible extractors, validate and normalize data

Challenge: High variability in property data completeness

  • Solution: Make fields optional, document which data is always available
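A flexible extractor along these lines normalizes the common display formats ("1,234 sq ft", "2.5 baths", "—") into clean numbers or nulls — a sketch of the normalize step recommended above:

```javascript
// Pull the first number out of a messy display string; null when absent
function parseNumeric(text) {
    if (!text) return null;
    const match = text.replace(/,/g, '').match(/\d+(\.\d+)?/);
    return match ? parseFloat(match[0]) : null;
}
```

Routing every numeric field through one normalizer keeps the optional-field logic in a single place instead of scattered across extractors.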

Getting Started Today

To begin scraping Redfin data:

  1. Start with the Apify Actor: The nexgendata Redfin Real Estate Scraper handles the technical complexity. It's battle-tested against Redfin's structure and includes proxy rotation, error handling, and automatic data formatting.

  2. Define your market scope: Start with one city or ZIP code. Debug your data extraction against real listings.

  3. Build incrementally: Get 100 listings working perfectly before scaling to 10,000.

  4. Integrate with MCP: Once data collection is stable, connect to the Real Estate MCP server to unlock AI-powered analysis.

  5. Automate schedules: Set up daily or weekly scraping to maintain current market intelligence.

Pricing and Scaling

For small operations (under 50,000 listings/month):

  • Direct actor costs: $50-$150/month
  • Compute costs: Included in Apify platform
  • Total: Development time + minimal recurring costs

For large operations (500,000+ listings/month):

  • Direct actor costs: $500-$1,500/month
  • Compute costs: $200-$500/month
  • Total: Still 5-10x cheaper than traditional data providers

The cost to scrape Redfin data scales linearly with volume but remains dramatically lower than traditional real estate data services.

Conclusion

In 2026, real estate professionals who don't have access to current, comprehensive market data are at a significant disadvantage. Redfin contains some of America's most valuable property information, but its lack of an API shouldn't prevent you from accessing it.

The combination of automated scraping and AI-powered analysis represents the modern approach to real estate intelligence. Instead of paying premium prices to data brokers or spending countless hours on manual research, forward-thinking investors, agents, and developers are building their own data pipelines using tools like the Apify SDK and intelligent analysis through MCP servers.

The technical barrier to scraping Redfin data has become minimal. The real competitive advantage lies in what you do with the data once you have it—which is why the Real Estate MCP server integration is so powerful. AI agents armed with current market data can identify opportunities, analyze risk, and generate insights that would take humans weeks to produce.

Ready to build your real estate data pipeline?

Start with a pilot project in your target market. The cost is minimal, and the insights you'll gain from current, comprehensive Redfin data will quickly justify the investment in your real estate operation.
