Hey Reader!
I've been seriously irked lately. Client sites are getting hammered with spam and bot traffic, primarily from China and Singapore, and I've been running experiments to see what I can filter out in Google Tag Manager before it ever pollutes our clients' production GA4 properties. Here's where I'm at so far (with a sketch pulling the browser-side signals together after the list):
- GeoIP — block based on country. This only helped a little, since the IPs spammers use typically aren't in GeoIP databases.
- Browser version — most spammers are using an older version of Chrome, but some are using newer ones, so this only helped a bit.
- Time zone — I'm testing this now: detect the browser's reported time zone and see if that provides a cleaner signal. Initial results are promising, but this only works well if you aren't expecting legitimate visitors from China/Singapore time zones.
- Webdriver — most headless browsers report this as true, but smart spammers spoof it to false (and these spammers are smart).
- Languages — spam browsers usually have only one language set.
- Plugin count — spam browsers typically have zero plugins.
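To make this concrete, here's a rough TypeScript sketch of how the browser-side signals (webdriver, languages, plugins, time zone) could roll up into one suspicion score. The weights, the cutoff, the time-zone list, and the human_traffic_ok event name are all placeholders rather than anything I'm running in production, and since GTM Custom JavaScript variables require ES5, treat this as the logic, not paste-ready GTM code:

```typescript
// Rough bot-likelihood check combining the browser-side signals above.
// Weights, cutoff, and the time-zone list are placeholders; tune per site.
const SUSPECT_TIME_ZONES = ["Asia/Shanghai", "Asia/Singapore"];

function suspicionScore(): number {
  let score = 0;

  // Headless browsers usually report webdriver as true; spoofed ones say false.
  if (navigator.webdriver) score += 2;

  // Spam browsers usually have exactly one language configured.
  if (navigator.languages.length <= 1) score += 1;

  // Spam browsers typically report zero plugins.
  if (navigator.plugins.length === 0) score += 1;

  // Browser-reported time zone, assuming no legitimate visitors from these zones.
  const tz = Intl.DateTimeFormat().resolvedOptions().timeZone;
  if (SUSPECT_TIME_ZONES.includes(tz)) score += 2;

  return score;
}

// Example: only let the analytics tags fire when the score stays under a cutoff.
if (suspicionScore() < 3) {
  const w = window as { dataLayer?: Record<string, unknown>[] };
  (w.dataLayer = w.dataLayer || []).push({ event: "human_traffic_ok" });
}
```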
The jury is still out on which one or combination of these will actually hold up at scale. I'll report back in the next issue with what I've learned! If you've dealt with this problem and found something that works, I'd love to hear about it.
Stop Letting Ad Platforms Guess What "Good" Looks Like
Speaking of traffic quality, how do you actually tell Google Ads or Meta what a quality visit looks like, beyond just waiting for a form fill?
My latest guide walks through how to build a quality traffic metric in GTM: a composite event that requires two things to be true at the same time — a time threshold (like 30 seconds) AND a scroll depth or element visibility trigger. When both conditions are met, the event fires. One Trigger Group in GTM, multiple tags sending the signal to Google Ads, Meta, and GA4.
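In GTM this is all point-and-click: one Trigger Group wrapping the built-in timer and scroll-depth (or element-visibility) triggers. But if you want to see the logic laid bare, a standalone sketch looks roughly like this, with the quality_visit event name and both thresholds as stand-ins:

```typescript
// Standalone sketch of the composite "quality visit" logic a GTM Trigger
// Group gives you for free: fire one event only after BOTH a time threshold
// and a scroll-depth threshold have been met.
const TIME_THRESHOLD_MS = 30_000; // on page for at least 30 seconds
const SCROLL_DEPTH = 0.5;         // scrolled past 50% of the page

let timeMet = false;
let scrollMet = false;
let fired = false;

function maybeFire(): void {
  if (fired || !timeMet || !scrollMet) return;
  fired = true; // fire at most once per page view
  const w = window as { dataLayer?: Record<string, unknown>[] };
  (w.dataLayer = w.dataLayer || []).push({ event: "quality_visit" });
}

setTimeout(() => { timeMet = true; maybeFire(); }, TIME_THRESHOLD_MS);

window.addEventListener("scroll", () => {
  const seen = (window.scrollY + window.innerHeight) / document.documentElement.scrollHeight;
  if (seen >= SCROLL_DEPTH) { scrollMet = true; maybeFire(); }
}, { passive: true });
```

Either condition alone is cheap to hit by accident; requiring both at once is what makes the signal worth sending to the ad platforms.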
What I appreciate about this approach is that it sits right in the middle of your measurement hierarchy: more meaningful than "technically engaged," less restrictive than a form submission. And it gives you a diagnostic matrix for troubleshooting campaigns: a low quality-traffic rate alongside a low conversion rate tells you something very different from a healthy quality-traffic rate with low conversions. The former suggests you're attracting the wrong audience; the latter suggests engaged visitors aren't converting, which points at the offer or the conversion path.
Read the full guide →
Articles Worth Your Time ———•
Log File Analysis Is Back on the Menu (Did It Ever Actually Go Away?)
SearchPilot pulled together a roundup of the most underrated retail SEO levers for 2026 from in-house folks, and one point jumped out at me immediately: log file analysis is having a moment again.
Antonis Konstantinidis from Charlotte Tilbury makes the case that with all the talk about LLMs and bots, most teams still aren't actually checking their server logs to see what those bots are doing. Logs tell you whether crawlers are reaching your key pages, whether they're being blocked or slowed down, and whether a recent deployment accidentally cut something off.
Sure, this piece is retail-focused, but the principles apply everywhere. Do you have a way to easily analyze your log files? If not, this might be the nudge to set that up. The rest of the roundup is worth your time too.
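If you don't have tooling for this yet, a zero-dependency starting point can be surprisingly small. Here's a rough Node/TypeScript sketch that tallies bot hits by path from a combined-format access log; the bot list, the access.log path, and the request regex are all assumptions to adapt to your stack:

```typescript
// Quick-and-dirty bot audit over an Nginx/Apache combined-format access log.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

const BOTS = ["Googlebot", "bingbot", "GPTBot", "ClaudeBot", "PerplexityBot"];
const hits = new Map<string, number>(); // "bot path" -> request count

const rl = createInterface({ input: createReadStream("access.log") });

rl.on("line", (line) => {
  const bot = BOTS.find((b) => line.includes(b));
  if (!bot) return;
  // Combined log format: ... "GET /some/path HTTP/1.1" status bytes ...
  const m = line.match(/"(?:GET|POST|HEAD) (\S+)/);
  if (!m) return;
  const key = `${bot} ${m[1]}`;
  hits.set(key, (hits.get(key) ?? 0) + 1);
});

rl.on("close", () => {
  // Top 20 bot/path combinations: are crawlers reaching your key pages?
  [...hits.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 20)
    .forEach(([key, n]) => console.log(n, key));
});
```

One caveat: user-agent matching alone will also count bots spoofing those strings, so verifying against published crawler IP ranges is the natural next step up.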
From Keywords to Topics: What That Actually Means for Your Analysis
Ahrefs published a thorough explainer on how to focus on topics rather than individual keywords in your SEO strategy. Another SEO post? I promise this one has a very relevant analytics angle.
What I kept thinking while reading is that this is where analysis can really help. If you're working with Google Search Console data, building a blend that groups keywords by topic gives you a much more useful picture of performance than keyword-by-keyword reporting. Instead of watching 50 individual queries go up and down, you can see whether your topical coverage is growing or shrinking as a whole.
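To make the blend idea concrete, here's a minimal sketch that rolls Search Console query rows up into topic totals. The topic map is a stand-in; in practice it might come from a keyword-research export or an embedding-based match:

```typescript
// Sketch: roll up Search Console query rows into topic-level totals.
interface QueryRow { query: string; clicks: number; impressions: number; }

// Stand-in mapping; real topic assignments would come from elsewhere.
const TOPICS: Record<string, string[]> = {
  "log analysis": ["log file", "server log", "access log"],
  "tag management": ["tag manager", "gtm", "datalayer"],
};

function topicFor(query: string): string {
  for (const [topic, stems] of Object.entries(TOPICS)) {
    if (stems.some((s) => query.toLowerCase().includes(s))) return topic;
  }
  return "(unmapped)";
}

function rollUp(rows: QueryRow[]): Map<string, { clicks: number; impressions: number }> {
  const totals = new Map<string, { clicks: number; impressions: number }>();
  for (const row of rows) {
    const topic = topicFor(row.query);
    const agg = totals.get(topic) ?? { clicks: 0, impressions: 0 };
    agg.clicks += row.clicks;
    agg.impressions += row.impressions;
    totals.set(topic, agg);
  }
  return totals;
}
```

Now a dip in clicks for one topic is a single row to investigate instead of fifty queries to eyeball.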
Worth reading, and worth thinking about how you'd structure your own reporting to reflect topical performance rather than just individual query rankings.
Why "I Don't Trust Their Data" Is a Positioning Problem
I know, another “this isn’t analytics!!!” post. Hear me out.
April Dunford wrote a guide to advanced B2B positioning on Lenny's Newsletter that made me think immediately about how analytics practitioners are perceived inside organizations.
Analytics can be a weird, siloed discipline that stakeholders either over-rely on or quietly distrust. One of the hardest situations I encounter is when there are known data quality issues and leadership uses those gaps as a reason to dismiss findings entirely ("we can't trust this data") instead of making decisions with appropriate caveats.
April's piece is really about how positioning shapes the lens through which people evaluate what you do. If stakeholders understand why a gap exists and whether it actually affects the specific decision at hand, they're in a much better position to move forward confidently. The difference between "I don't trust their data" and "we're aware of these gaps, but they don't affect this decision" is largely a positioning and communication problem, not just a technical one.
Where You Can Find Me ———•
Women in Tech SEO — Portland
I'll be speaking at the Women in Tech SEO conference in Portland this May! If you're going to be there, come find me — I'd love to meet some of you in person.
That's it for this edition of The Huddle. As always, if you have questions or want to share what you're working on, just hit reply!