One of the most interesting things about blocking third-party content on websites by default is seeing how web developers handle it. Captchas might be the best example.
First, I almost never notice that they're missing, because they're embedded without any hint that they exist (e.g. a frame or a "Captcha:" label). Second, it's interesting to see what happens when you proceed anyway. Some sites are pretty clear ("Captcha hasn't been solved"), some give you a generic error, and some just won't proceed at all.
The hardest riddle I've had to solve so far was a form I couldn't submit, because the address I entered was supposed to be validated by Google Maps. Obviously, there was neither a hint that this would happen, nor any request for consent before sending someone's address off to Google.
As with all development, clear design and good error handling are important. If you happen to build websites, try testing their usability with NoScript or uMatrix turned on before deployment. :)
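A minimal sketch of what that error handling could look like: check whether the third-party captcha script actually loaded, and show a clear message instead of failing silently. This assumes Google's reCAPTCHA, which defines a `grecaptcha` global when its script loads; the function name and message text are my own suggestions, not anything from the thread.

```javascript
// Minimal sketch (assumption: the captcha is Google reCAPTCHA, whose
// script defines a "grecaptcha" global when it loads successfully).
// If a blocker like NoScript or uMatrix prevents the script from
// loading, that global never appears, and we can tell the user why
// the form won't work instead of failing silently.
function captchaStatus(globals) {
  // globals: the page's global object (window in a browser).
  if (typeof globals.grecaptcha === "undefined") {
    // Script was blocked or failed to load.
    return {
      ok: false,
      message:
        "The captcha could not be loaded. It is served by a third party " +
        "and may be blocked by your browser or an extension.",
    };
  }
  return { ok: true, message: "" };
}
```

In a browser you would call `captchaStatus(window)` before enabling the submit button, and render `message` next to the form when `ok` is false, so the user at least knows a captcha was supposed to be there.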
On the other hand, it's remarkable how many external resources are usually loaded even though they have no impact on the site's functionality whatsoever.
I've been using those tools for quite some time now, but I'm still surprised by how much unnecessary stuff gets added to websites regularly.
@esureL Ugh, right? Watching websites degrade disgracefully is usually a dumpster fire. OTOH I'm always impressed when I visit a site and it loads mostly functional despite only having access to 1st-party CSS and images.
@Ephaemera True. That's something I noticed, too. See post three. :D