If you're tracking brand perception in the Korean market, Naver Blog is the single most important data source you're probably not monitoring. With over 30 million active blogs and deep integration into Korea's digital ecosystem, Naver Blog is where Korean consumers share honest reviews, product comparisons, and brand experiences.
This guide shows you how to build automated brand monitoring pipelines using the Naver Blog Scraper on Apify.
## Why Naver Blog Matters for Brand Monitoring
Google dominates search globally, but in South Korea, Naver holds ~60% of search market share. Naver Blog posts rank prominently in Naver search results, making them a primary channel for:
- Product reviews — Korean consumers actively blog about purchases
- Restaurant and cafe reviews — The "맛집" (must-visit restaurant) blogging culture is massive
- Brand sentiment — Unfiltered opinions that don't appear on official review platforms
- Influencer content — Many Korean influencers use Naver Blog as their primary platform
If you're only monitoring Twitter/X and Google for Korean brand mentions, you're missing the majority of the conversation.
## The API Problem
Naver does offer a Blog Search API, but it comes with significant limitations:
- No full content — The API returns only titles and brief snippets (max ~200 characters)
- No engagement metrics — No access to likes, comments, or share counts
- No hashtags — Tags attached to posts aren't included
- Rate limits — 25,000 calls/day, which sounds generous until you need full content
- Pagination cap — Maximum 1,100 results per query
For serious brand monitoring, you need the full post text (for sentiment analysis), engagement metrics (for impact assessment), and hashtags (for trend tracking). The official API gives you none of these.
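To see the gap concretely, here is a minimal request against the official Blog Search API. This is a sketch: the endpoint and `X-Naver-Client-*` header scheme come from Naver's developer documentation, and `build_blog_search_request` is an illustrative helper, not part of any SDK.

```python
import urllib.parse
import urllib.request

NAVER_API_URL = "https://openapi.naver.com/v1/search/blog.json"

def build_blog_search_request(query: str, client_id: str, client_secret: str,
                              display: int = 10) -> urllib.request.Request:
    """Build a request against the official Naver Blog Search API.

    The JSON response contains only title/link/snippet-style fields
    (plus blogger name and post date) -- no full content, no engagement
    metrics, no hashtags.
    """
    url = f"{NAVER_API_URL}?query={urllib.parse.quote(query)}&display={display}&sort=date"
    return urllib.request.Request(url, headers={
        "X-Naver-Client-Id": client_id,
        "X-Naver-Client-Secret": client_secret,
    })

req = build_blog_search_request("스타벅스 후기", "YOUR_CLIENT_ID", "YOUR_CLIENT_SECRET")
# resp = urllib.request.urlopen(req)  # uncomment with real credentials
```

Even with credentials from developers.naver.com, this route caps out at snippets — which is exactly why full-content extraction matters.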
## Solution: Naver Blog Scraper on Apify
The Naver Blog Scraper extracts complete blog post data including full content, engagement metrics, hashtags, and metadata — all without needing a Naver API key.
### What You Get
Each extracted post includes:
| Field | Description |
|---|---|
| `title` | Blog post title |
| `url` | Direct link to the post |
| `authorName` | Blogger's display name |
| `publishDate` | Publication date and time |
| `fullContent` | Complete post text |
| `contentLength` | Character count |
| `hashtags` | Tags attached by the author |
| `sympathyCount` | Likes ("공감") |
| `commentCount` | Number of comments |
| `shareCount` | Number of shares |
| `categoryName` | Author's blog category |
| `thumbnailUrl` | Main image URL |
## Code Example 1: Brand Mention Monitoring
Track daily mentions of your brand and extract sentiment-ready data:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

def monitor_brand_mentions(brand_keywords: list[str], max_results: int = 50):
    """Monitor Naver Blog for brand mentions, sorted by date."""
    run = client.actor("oxygenated_quagmire/naver-blog-search").call(
        run_input={
            "queries": brand_keywords,
            "maxResults": max_results,
            "sortBy": "date",
            "includeFullContent": True,
            "includeMetadata": True,
        }
    )
    mentions = []
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        mentions.append({
            "title": item["title"],
            "url": item["url"],
            "date": item["publishDate"],
            "content": item.get("fullContent", ""),
            "likes": item.get("sympathyCount", 0),
            "comments": item.get("commentCount", 0),
            "hashtags": item.get("hashtags", []),
            "query": item["searchQuery"],
        })
    print(f"Found {len(mentions)} mentions for {brand_keywords}")
    return mentions

# Track mentions of a cosmetics brand
mentions = monitor_brand_mentions(
    ["이니스프리 리뷰", "innisfree 후기"],
    max_results=100,
)

# Filter high-engagement posts
hot_posts = [m for m in mentions if m["likes"] > 10 or m["comments"] > 5]
print(f"{len(hot_posts)} high-engagement posts found")
```
Schedule this as a daily Apify task, and you have continuous brand monitoring without maintaining any infrastructure.
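The `fullContent` field is what makes sentiment analysis possible in the first place. A minimal keyword-based scorer over the mention dicts built above might look like this — a naive sketch with a tiny illustrative word list, not a real lexicon; for production you'd want a proper Korean NLP model.

```python
# Naive keyword-based sentiment scoring for Korean review text.
# These word lists are a tiny illustrative sample, not a real lexicon.
POSITIVE = ["좋아요", "추천", "만족", "최고", "재구매"]
NEGATIVE = ["별로", "실망", "비추", "최악", "환불"]

def score_sentiment(text: str) -> int:
    """Return positive minus negative keyword hits (rough polarity)."""
    pos = sum(text.count(w) for w in POSITIVE)
    neg = sum(text.count(w) for w in NEGATIVE)
    return pos - neg

def label_mentions(mentions: list[dict]) -> list[dict]:
    """Attach a coarse sentiment label to each mention dict in place."""
    for m in mentions:
        s = score_sentiment(m.get("content", ""))
        m["sentiment"] = "positive" if s > 0 else "negative" if s < 0 else "neutral"
    return mentions
```

Run `label_mentions(mentions)` after each monitoring pass and you can track the positive/negative ratio over time, not just raw volume.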
## Code Example 2: Competitive Analysis
Compare blog coverage volume and sentiment across competitors:
```python
from collections import Counter

def competitive_analysis(competitors: dict[str, list[str]], max_results: int = 100):
    """
    Compare blog coverage across competitors.

    competitors: {"Brand A": ["keyword1", "keyword2"], ...}
    """
    results = {}
    for brand, keywords in competitors.items():
        run = client.actor("oxygenated_quagmire/naver-blog-search").call(
            run_input={
                "queries": keywords,
                "maxResults": max_results,
                "sortBy": "date",
                "includeFullContent": True,
                "includeMetadata": True,
            }
        )
        posts = list(client.dataset(run["defaultDatasetId"]).iterate_items())

        # Aggregate metrics
        total_likes = sum(p.get("sympathyCount", 0) for p in posts)
        total_comments = sum(p.get("commentCount", 0) for p in posts)
        avg_content_len = (
            sum(p.get("contentLength", 0) for p in posts) / len(posts)
            if posts else 0
        )

        # Extract trending hashtags
        all_tags = []
        for p in posts:
            all_tags.extend(p.get("hashtags", []))
        top_hashtags = Counter(all_tags).most_common(10)

        results[brand] = {
            "post_count": len(posts),
            "total_engagement": total_likes + total_comments,
            "avg_content_length": round(avg_content_len),
            "top_hashtags": top_hashtags,
        }

    # Print comparison
    for brand, data in results.items():
        print(f"\n📊 {brand}")
        print(f"  Posts: {data['post_count']}")
        print(f"  Engagement: {data['total_engagement']}")
        print(f"  Avg Length: {data['avg_content_length']} chars")
        print(f"  Top Tags: {[t[0] for t in data['top_hashtags'][:5]]}")
    return results

# Compare coffee brands in Korea
competitive_analysis({
    "Starbucks": ["스타벅스 후기", "스타벅스 리뷰"],
    "Mega Coffee": ["메가커피 후기", "메가커피 리뷰"],
    "Compose Coffee": ["컴포즈커피 후기", "컴포즈커피 리뷰"],
})
```
This gives you a quick snapshot of how your brand stacks up against competitors in terms of blog coverage volume, reader engagement, and associated hashtags.
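A useful derived metric here is share of voice: each brand's fraction of total blog posts. A small helper over the `results` dict that `competitive_analysis` returns (a sketch, assuming only the `post_count` key from above):

```python
def share_of_voice(results: dict) -> dict[str, float]:
    """Compute each brand's share of total blog posts, as a percentage."""
    total = sum(d["post_count"] for d in results.values())
    if total == 0:
        return {brand: 0.0 for brand in results}
    return {brand: round(100 * d["post_count"] / total, 1)
            for brand, d in results.items()}

# Hypothetical post counts, for illustration only:
sov = share_of_voice({
    "Starbucks": {"post_count": 120},
    "Mega Coffee": {"post_count": 60},
    "Compose Coffee": {"post_count": 20},
})
print(sov)  # {'Starbucks': 60.0, 'Mega Coffee': 30.0, 'Compose Coffee': 10.0}
```

Tracked over time, share of voice tells you whether you're gaining or losing ground in the conversation regardless of overall category volume.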
## Code Example 3: Trend Detection
Detect emerging trends by tracking hashtag co-occurrence patterns:
```python
from collections import Counter, defaultdict
from itertools import combinations

def detect_trends(topic_keywords: list[str], max_results: int = 200):
    """Detect trending topics and hashtag clusters from Naver Blog."""
    run = client.actor("oxygenated_quagmire/naver-blog-search").call(
        run_input={
            "queries": topic_keywords,
            "maxResults": max_results,
            "sortBy": "date",
            "includeFullContent": True,
            "includeMetadata": True,
        }
    )
    posts = list(client.dataset(run["defaultDatasetId"]).iterate_items())

    # Build hashtag co-occurrence graph
    co_occurrence = defaultdict(int)
    tag_frequency = Counter()
    for post in posts:
        tags = post.get("hashtags", [])
        for tag in tags:
            tag_frequency[tag] += 1
        for pair in combinations(sorted(set(tags)), 2):
            co_occurrence[pair] += 1

    # Find trending clusters
    print(f"\n🔍 Analyzed {len(posts)} posts")
    print("\n📈 Top 15 Hashtags:")
    for tag, count in tag_frequency.most_common(15):
        print(f"  #{tag} ({count} posts)")

    print("\n🔗 Strongest Tag Pairs:")
    top_pairs = sorted(co_occurrence.items(), key=lambda x: x[1], reverse=True)[:10]
    for (t1, t2), count in top_pairs:
        print(f"  #{t1} + #{t2} ({count} co-occurrences)")

    # Engagement-weighted trends
    tag_engagement = defaultdict(int)
    for post in posts:
        engagement = post.get("sympathyCount", 0) + post.get("commentCount", 0)
        for tag in post.get("hashtags", []):
            tag_engagement[tag] += engagement

    print("\n🔥 Highest Engagement Tags:")
    for tag, eng in sorted(tag_engagement.items(), key=lambda x: x[1], reverse=True)[:10]:
        print(f"  #{tag} (total engagement: {eng})")

    return {
        "post_count": len(posts),
        "top_tags": tag_frequency.most_common(15),
        "tag_pairs": top_pairs,
        "engagement_tags": sorted(
            tag_engagement.items(), key=lambda x: x[1], reverse=True
        )[:10],
    }

# Detect trends in Korean skincare
trends = detect_trends(["스킨케어 추천", "피부관리 루틴", "화장품 리뷰"])
```
Run this weekly and compare results over time to spot rising trends before they peak.
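The week-over-week comparison can be sketched as a simple diff of two tag-count snapshots, e.g. `dict(trends["top_tags"])` saved from last week's run versus this week's. `rising_tags` is an illustrative helper, not part of the Actor:

```python
def rising_tags(prev: dict[str, int], curr: dict[str, int],
                min_count: int = 3) -> list[tuple[str, int]]:
    """Return tags whose post count grew week over week, biggest gains first.

    prev and curr map hashtag -> post count from two snapshots;
    tags below min_count in the current week are ignored as noise.
    """
    deltas = [(tag, count - prev.get(tag, 0))
              for tag, count in curr.items() if count >= min_count]
    return sorted([d for d in deltas if d[1] > 0],
                  key=lambda x: x[1], reverse=True)

# Hypothetical snapshots, for illustration only:
last_week = {"수분크림": 12, "선크림": 8}
this_week = {"수분크림": 14, "선크림": 20, "레티놀": 9}
print(rising_tags(last_week, this_week))
# [('선크림', 12), ('레티놀', 9), ('수분크림', 2)]
```

Tags that appear in `curr` but not in `prev` (like 레티놀 above) are exactly the new-entrant signals you want to catch early.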
## Getting Started
- Create a free Apify account at apify.com
- Get your API token from Settings → Integrations
- Install the client: `pip install apify-client`
- Try the Actor: Naver Blog Scraper — click "Try for free" to test with your own keywords
### Quick Test
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

run = client.actor("oxygenated_quagmire/naver-blog-search").call(
    run_input={
        "queries": ["강남 맛집"],
        "maxResults": 10,
        "sortBy": "date",
        "includeFullContent": True,
    }
)

for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(f"{item['title']} — {item.get('sympathyCount', 0)} likes")
```
## Pricing
The Actor runs on Apify's pay-per-usage model. A typical run extracting 100 posts with full content costs approximately $0.05–0.10 depending on content length. The free tier includes $5/month of platform credits — enough for regular monitoring of several brands.
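A quick back-of-the-envelope check, assuming the worst-case figure quoted above; actual costs vary with content length:

```python
# Rough free-tier capacity estimate (assumes ~$0.10 per 100 posts,
# the worst case from the pricing note above).
COST_PER_100_POSTS = 0.10
FREE_CREDITS = 5.00

posts_per_month = int(FREE_CREDITS / COST_PER_100_POSTS * 100)
print(posts_per_month)  # 5000
```

Roughly 5,000 posts per month on free credits is comfortably enough for daily monitoring of a handful of brand keywords.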
## What's Next
This is part of a growing collection of Korean data extraction tools on Apify, covering Naver Place, Melon Charts, Musinsa, Daangn, Bunjang, YES24, and more. If you're building data pipelines for the Korean market, check out the full suite.
Questions or feature requests? Open an issue on the Actor page or find me on X @sessionzero_ai.