Why are paywalls so weak?


As part of my day job I've been investigating (soft) paywall implementation.

As I poke at various influential sites, I'm finding all of them are very soft. When a user is logged out, they all seem to use localStorage with simple integer counters, or sometimes JSON with a record of the articles read.
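To make that concrete, here is a minimal sketch of the pattern I keep seeing. The key name `articlesRead` and the limit of 3 are hypothetical placeholders (every site uses its own names and shapes); `storage` stands in for `window.localStorage` so the logic reads the same in or out of a browser.

```javascript
// Hypothetical soft-paywall counter of the kind many sites implement.
// In the browser you'd pass window.localStorage as `storage`.
function articlesRead(storage) {
  // localStorage only stores strings, so parse the counter back to a number.
  return parseInt(storage.getItem("articlesRead") || "0", 10);
}

function recordArticleView(storage, limit = 3) {
  const count = articlesRead(storage) + 1;
  storage.setItem("articlesRead", String(count));
  // false means the soft limit is exceeded and the paywall should show.
  return count <= limit;
}
```

Note how fragile this is: deleting the single key (or all of localStorage) resets the counter, which is exactly the weakness discussed below.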

Obviously there are serious technical, privacy and (possibly) legal challenges to tracking logged out users. And I would never want sites to implement something like that.

But why don't more sites force user login immediately?

With tracking limited to localStorage, users can bypass the wall by switching to incognito, switching browsers, or clearing the site's stored data; often these steps are less onerous than actually signing up for an account.

Why don't publishers care?

DISCUSS (6)
 

With tracking limited to localStorage, users can bypass the wall by switching to incognito

I'd guess the average web user doesn't have the slightest clue how they might do this. Even switching to incognito, which seems low-stakes, may not be obvious.

Furthermore, in certain mobile contexts it might be even less obvious or even impossible.

 

Publishers do care, but we nerds who know how to get around soft paywalls are factored into the cost of doing business, and the revenue lost from software developers who refuse to pay for content is negligible for a major publication.

BTW, the same conversation can be had around region-restricted content and VPNs. Ultimately, the number of people accessing restricted content with VPNs is so low that companies can't even see it on their bottom line. If that were to change, however, expect streaming services to start suing VPN providers for damages.

 

I'm inclined to agree with you. I'm wondering if anybody has ever published stats/studies that quantify this. It would be nice to give our stakeholders something more concrete than "...well that's how everybody else is doing it."

 

Probably, because the content must be indexed by search engines. Mandatory login would prevent that.

 

Well, there are ways to expose content to robots and not people.
Easy to work around, but still harder than clearing LS ;)
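One common (and, as noted, easily spoofed) way to expose content to robots but not people is user-agent sniffing. This is a hedged sketch, not any particular publisher's implementation; the pattern list is illustrative, and a spoofed `User-Agent` header sails right through it.

```javascript
// Illustrative crawler check via the User-Agent header.
// User-agent strings are trivially spoofable, hence "easy to work around".
const CRAWLER_PATTERNS = [/Googlebot/i, /bingbot/i, /DuckDuckBot/i];

function isLikelyCrawler(userAgent) {
  return CRAWLER_PATTERNS.some((re) => re.test(userAgent || ""));
}
```

A server would call something like `isLikelyCrawler(req.headers["user-agent"])` and skip the paywall when it returns true. More robust setups verify crawlers by reverse-DNS lookup of the requesting IP rather than trusting the header.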

 

Yes, but Google used to require sites to show the same content to robots and people... I'm not sure if that's still the case though.
