Introduction
Back in 2022, I attended SCaLE 19x. For those who aren't aware, SCaLE stands for Southern California Linux Expo, a community-run open-source and free software conference held annually in the greater Los Angeles area.
Among the many talks given, one I attended was titled Enrich Your Enterprise Knowledge Graph with Linked Open Data via JSON-LD, presented by Jans Aasman. The talk was informative and broadly covered how linked open data is used and how it has been growing. Linked open data originated in the semantic web community many years ago as a way to make web pages understandable by machines.
One interesting thing I learned from the talk was how linked data can enable developments in artificial intelligence, since it helps make documents more understandable to machines. Given the recent advances in AI, the talk served as a good primer and sparked my interest in the area.
Unfortunately, I didn't have a practical application for the information at the time. The talk ended with a note that Google uses JSON-LD (JavaScript Object Notation for Linked Data) for product descriptions, but that didn't resonate with me since I wasn't working on anything related to search. Since then, I've started a blog and returned to the technology after gaining an interest in search engine optimization.
Understanding JSON-LD
What is JSON-LD? Here's a quote from Wikipedia's JSON-LD definition:
JSON-LD (JavaScript Object Notation for Linked Data) is a method of encoding linked data using JSON.
From what I understand, linked data allows data entities to be linked to each other. For example, one could define a person entity and link it to an article entity as the article's author. One way to make use of JSON-LD is for SEO purposes, using type definitions from https://schema.org.
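To make that linking idea concrete, here's a small hedged sketch of an article whose author is a person entity (the names and values below are made up for illustration):

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "An Example Article",
  "author": {
    "@type": "Person",
    "name": "Example Author",
    "url": "https://example.com/about/"
  }
}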
As an example, here is what an ImageObject instance would look like:
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "contentUrl": "https://logarithmicspirals.com/some-image.png"
}
From what I remember of Jans Aasman's talk at SCaLE, linked data can be used to build knowledge graphs for both public and private use. JSON-LD makes accessing this data simple by leveraging the broad support for JSON across programming languages. In the case of this article, JSON-LD helps contribute to Google's knowledge graph by providing structured data about the article.
When I first attended the talk, I wasn't working on any projects which could readily benefit from structured data or JSON-LD. However, after starting this blog, I came back around to the technology while exploring the Google Search Console. Google and other search engines support certain schemas for enhancing search results. Google provides documentation about this in an article titled Structured data markup that Google Search supports.
Implementing JSON-LD for Rich Results
The first step of implementing JSON-LD for rich results was to connect the structured data markup supported by Google with the content I offer on my blog. Features like carousels for restaurant reviews aren't really something I can make use of. That said, I could definitely use breadcrumb, article, and image markup.
As such, I decided to start with article markup. The article markup ended up being the most involved and least rewarding of the three. In this case, rewarding means it shows up in Google Search Console under the enhancements tab.
To start creating the JSON-LD, I looked for a TypeScript package I could install to help. Thankfully, I was able to find the package schema-dts which allows developers to create objects matching schemas from https://schema.org.
After installing the package with npm install schema-dts, I got to work on some refactoring to make it easy to add schemas to various pages.
Blog Post Article Markup
The first thing I did was create a file at src/utils/get-schema.ts. In that file, I added the following code:
import type { CollectionEntry } from "astro:content";
import type { BlogPosting, WithContext, Person } from "schema-dts";

const defaultCreatorName = "Logarithmic Spirals";

// Default author/creator used across the schemas on the site.
const defaultCreator: Person = {
  "@type": "Person",
  "name": defaultCreatorName,
  "url": "https://logarithmicspirals.com/about/"
};

// Build a BlogPosting schema from a blog post's frontmatter and its URL.
const getBlogPostingSchema = (post: CollectionEntry<'blog'>, url: string): WithContext<BlogPosting> => {
  const { title, description, tags, pubDate, updatedDate, heroImage } = post.data;
  const image = heroImage.src;

  return {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": title,
    "description": description,
    "keywords": tags,
    "author": defaultCreator,
    "datePublished": pubDate.toISOString(),
    // Only include dateModified when the post has actually been updated.
    ...(updatedDate && { "dateModified": updatedDate.toISOString() }),
    "image": image,
    "inLanguage": "en-US",
    "url": url
  };
};

export { getBlogPostingSchema }
The code uses the frontmatter schema I defined when I originally created the site (although it has been modified over time). Note that this code is partially influenced by code from an article titled Adding structured data to blog posts using Astro.
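For context, here is a minimal sketch of what such a frontmatter schema could look like as an Astro content collection. The field names mirror the properties destructured in getBlogPostingSchema, but the exact file location (src/content/config.ts) and validation details are assumptions rather than my actual configuration:

import { defineCollection, z } from "astro:content";

// Hypothetical sketch of a blog collection exposing the frontmatter fields
// consumed by getBlogPostingSchema (title, description, tags, dates, heroImage).
const blog = defineCollection({
  type: "content",
  schema: ({ image }) =>
    z.object({
      title: z.string(),
      description: z.string(),
      tags: z.array(z.string()),
      pubDate: z.coerce.date(),
      updatedDate: z.coerce.date().optional(),
      heroImage: image()
    })
});

export const collections = { blog };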
The getBlogPostingSchema function can then be used to render the blog post pages using the blog post data.
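As a rough sketch of where that happens, the dynamic blog post route might look something like this. The file path and import alias below are assumptions, and the resulting schema is handed to the base layout as shown a little further down:

---
// src/pages/blog/[...slug].astro (hypothetical path and alias)
import { getCollection } from "astro:content";
import { getBlogPostingSchema } from "@utils/get-schema";

// Generate a static page for every entry in the blog collection.
export async function getStaticPaths() {
  const posts = await getCollection("blog");
  return posts.map((post) => ({
    params: { slug: post.slug },
    props: { post }
  }));
}

const { post } = Astro.props;
const schema = getBlogPostingSchema(post, new URL(Astro.url.pathname, Astro.site).toString() + "/");
---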
Next, I needed to update my layouts so the schema could be added to the various pages it needs to be on. To do this, I updated my base page layout, a custom layout defined at src/layouts/BasePage.astro. Here's what that looks like:
---
import type { WithContext, Thing, WebPage, WebSite, BreadcrumbList } from "schema-dts";
import BaseHead from "@components/BaseHead.astro";
import Body from "@layouts/Body.astro";

interface Props {
  title: string;
  description: string;
  image?: string;
  hasNavBar?: boolean;
  schema?: Thing;
  breadcrumb?: BreadcrumbList[] | BreadcrumbList;
}

const { title, description, image, hasNavBar, schema, breadcrumb } = Astro.props;

// Every page is part of the site, so the WebSite schema is always included.
const webSiteSchema: WithContext<WebSite> = {
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Logarithmic Spirals",
  "url": Astro.site?.toString()
};

const webPageSchema: WithContext<WebPage> = {
  "@context": "https://schema.org",
  "@type": "WebPage",
  "url": new URL(Astro.url.pathname, Astro.site) + "/",
  "description": description,
  "isPartOf": webSiteSchema
};

if (breadcrumb) {
  webPageSchema.breadcrumb = breadcrumb;
}

// If a more specific schema was passed in (e.g. a BlogPosting), nest the
// WebPage schema inside it; otherwise fall back to the plain WebPage schema.
let schemaToUse: Thing = webPageSchema;
if (schema) {
  schema["isPartOf"] = webPageSchema;
  schemaToUse = schema;
}
---
<!doctype html>
<html lang="en">
  <head>
    <BaseHead title={title} description={description} image={image} />
    <script is:inline type="application/ld+json" set:html={JSON.stringify(schemaToUse)} />
  </head>
  <Body hasNavBar={hasNavBar}>
    <slot />
  </Body>
</html>
Note the separation of schema and breadcrumb. I treat those separately since not every page with a schema has a breadcrumb. The most important piece is the injection of the JSON-LD into the page through a script tag in the head of the document.
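For a concrete picture, the rendered head of a page ends up containing something along these lines (the JSON here is abbreviated and illustrative, not the exact output):

<head>
  <!-- ...meta tags rendered by BaseHead... -->
  <script type="application/ld+json">
    {"@context":"https://schema.org","@type":"WebPage","url":"https://logarithmicspirals.com/blog/some-post/","description":"...","isPartOf":{"@context":"https://schema.org","@type":"WebSite","name":"Logarithmic Spirals","url":"https://logarithmicspirals.com/"}}
  </script>
</head>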
The article schema then gets passed in through child pages like this:
---
// ...
const schema: BlogPosting = getBlogPostingSchema(
  post,
  new URL(Astro.url.pathname, Astro.site).toString() + "/",
);
---
<BasePage schema={schema}>
</BasePage>
Breadcrumb Markup
My individual blog post and tag pages have had breadcrumb navigation elements for a while now. However, it seems Google doesn't understand the HTML breadcrumb as well as it does the JSON-LD breadcrumb (by HTML, I mean plain HTML without embedded structured data).
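For reference, a plain HTML breadcrumb is just an ordinary nav element along these lines (this is a generic example, not my exact markup):

<nav aria-label="Breadcrumb">
  <ol>
    <li><a href="/">Home</a></li>
    <li><a href="/blog/">Blog</a></li>
    <li aria-current="page">Some Post Title</li>
  </ol>
</nav>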
After adding the breadcrumb markup, I've noticed links to my site have changed in Google. Where I once saw trails like https://logarithmicspirals.com > blog > website-migration-aws-amplify-to-cloudflare-insights, I now see https://logarithmicspirals.com > Blog. While a subtle difference, it shows Google is paying attention to the newly added structured data.
To implement the breadcrumbs, I first added a new function to src/utils/get-schema.ts:
// BreadcrumbList is imported from "schema-dts" alongside the other types used above.
const getBlogPostBreadcrumbListSchema = (title: string, site?: URL): WithContext<BreadcrumbList> => {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: [
      {
        "@type": "ListItem",
        position: 1,
        name: "Home",
        item: site?.toString(),
      },
      {
        "@type": "ListItem",
        position: 2,
        name: "Blog",
        item: new URL("blog", site) + "/",
      },
      {
        "@type": "ListItem",
        position: 3,
        name: title,
      },
    ],
  };
};
The output of this function matches the examples given by Google in Breadcrumb (BreadcrumbList) structured data, and it is intended to be used on pages matching the path scheme https://logarithmicspirals.com/blog/{slug}.
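That schema is then passed into the base layout through the breadcrumb prop shown earlier. A minimal sketch of the call site in a blog post page might look something like this:

---
// ...
const breadcrumb = getBlogPostBreadcrumbListSchema(post.data.title, Astro.site);
---
<BasePage schema={schema} breadcrumb={breadcrumb}>
</BasePage>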
Image Markup
Lastly, another opportunity I saw to make use of rich results was on the /gallery/ page of my website. To achieve this, I created the following functions in src/utils/get-schema.ts:
// ImageObject and ImageGallery also come from "schema-dts"; ImageNode is a custom type (shown below).
const getImageObjectSchema = (imageNode: ImageNode): WithContext<ImageObject> => {
  return {
    "@context": "https://schema.org",
    "@type": "ImageObject",
    "contentUrl": imageNode.url,
    "creator": defaultCreator,
    "creditText": defaultCreatorName,
    "copyrightNotice": defaultCreatorName
  };
};

const getImageGallerySchema = (imageNodes: Array<ImageNode>): WithContext<ImageGallery> => {
  return {
    "@context": "https://schema.org",
    "@type": "ImageGallery",
    "image": imageNodes.map(imageNode => getImageObjectSchema(imageNode))
  };
};
The ImageNode type is a custom type I created and looks like this:
export type ImageNode = {
  key: string,
  title: string,
  technical: {
    bodyMake: string,
    bodyModel: string,
    focalLength: string,
    iso: string,
    lensMake: string,
    lensModel: string
  },
  url: string,
  nodeIndex?: number
};
For a better understanding of the ImageNode type, check out this previous article I wrote: Photo Gallery Migration: Gatsby to Astro Follow-Up.
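To give a rough idea of the data feeding these functions, a single entry in the IMAGES array could look something like this (all values below are placeholders, not real entries from my gallery):

// Hypothetical example entry; the real IMAGES constant lives elsewhere in my codebase.
const exampleImage: ImageNode = {
  key: "example-photo",
  title: "Example Photo",
  technical: {
    bodyMake: "Example Camera Maker",
    bodyModel: "Example Camera Model",
    focalLength: "50mm",
    iso: "100",
    lensMake: "Example Lens Maker",
    lensModel: "Example Lens Model"
  },
  url: "https://logarithmicspirals.com/images/example-photo.jpg"
};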
The schema generation is then used like this:
---
// Import the IMAGES constant which is an Array of ImageNodes.
// ...
const schema = getImageGallerySchema(IMAGES);
---
<BasePage schema={schema}>
</BasePage>
Results and Google Search Console Enhancements
So far, I haven't seen a noticeable increase in traffic through Google Search, but to be fair, my blog is quite niche, and the changes haven't been live for long.
In terms of search appearance, the breadcrumbs are showing up, and I've yet to check on the results for the image markup.
The most notable change has been in the Google Search Console where I'm now seeing an Enhancements section with information about the breadcrumbs and images.
Google also offers the Rich Results Test, which lets you check whether your markup is valid for rich results. This tool was quite useful while I was developing the update locally since it allows pasting code in directly.
Conclusion
For a small project, JSON-LD may or may not offer much benefit. In the case of my blog, it made sense to incorporate it into pages since it helps Google understand their content. Additionally, my blog is a tech blog, so taking the time to explore this technology had the added benefit of giving me something to write about. However, not every blog will benefit from it the way mine does.
Regardless, I think this is something other developers should consider exploring since it has a rich history and some practical applications in the web development space. For example, someone working on an eCommerce site may find that rich results on Google lead to more customer engagement through organic search.
Moving forward, I want to try to find some other opportunities for improving the use of structured data on my website. One thing I would like to do is add markup for the images on blog posts. I could also explore markup for videos if I ever get around to trying my hand at making video content.