Canaries with credit cards

This week we have more evidence that AI is shit, not that the hypemonsters will care.

We have watermelon projects and legal musings, and a bunch more tech companies firing people.

Slightly more fun is setting up booby-trapped credit card numbers for attackers to stumble over.

There’s a longer read on how system incentives lead to enshittification, and how to detect them.

This week’s tip is on choosing good examples when teaching other people how your thing works.

Things to note

Last week I attended Cloud Field Day 16 remotely. There were some interesting presentations from Forward Networks and Fortinet that I’ll write about in more detail shortly.

Surprise! That allegedly AI lawyer is a terrible lawyer. Who could ever have predicted, etc. This takedown by Kathryn Tewson is a fun read.

Dell has acquired automation software maker Cloudify for about $100 million.

The Information has an interesting writeup about a cancelled internal project at Stripe. Watermelons, projects that are green on the outside but red on the inside, are extremely common. Especially in large organisations.

If you have a favourite anecdote about a watermelon project, I’d love to hear about it, so please email me! Good ones will be shared with the list (after redacting anything that might get us all sued).

SAP is firing people, too. About 2.5% of its staff, which is ~3,000 people. No word on how much the change to customisations will cost, but the BASIS people were seen rubbing their hands together and cackling maniacally.

IBM is also firing people. About 1.5% of its global workforce, or ~3,900 people, but it’ll be offset by hiring in “higher-growth areas”. Mainframe revenue went up 16% since the Z16 came out last year, so I guess this means hiring people who can put OpenShift on CICS.

Stripe isn’t likely to go public any time soon, though. The Information is also reporting that a private capital raising is more likely. I expect a bunch of down-round news to start leaking out from various companies that raised funds early in the frenzy of the past couple of years but who need to go get more.

A decent summary of all the AI hype/bullshit from a professor with a book to sell. I will keep urging people to read the Stochastic Parrots paper until 80% of the people I meet have read it. Forward this email to anyone who says something dumb about ChatGPT being the end of ${THING}.

San Francisco apparently wants regulators to slow or stop the rollout of robotaxis that aren’t safe enough to operate on public roads. The article states that “[n]either [sic] vehicles from Cruise or Waymo have killed anyone on the streets of San Francisco”. I’m sure they’ll get there, given enough time.

In GitHub Copilot lawsuit news, Microsoft, GitHub, and OpenAI reckon the allegations aren’t specific enough, and claim fair use. This is just procedural manoeuvring, of which there will be a lot. I was interested to note the companies citing the Oracle v Google API case as supporting their position.

More on legals: The UK-based Financial Times FTAlphaville subunit has decided to stop running their own Mastodon instance. It turns out due diligence isn’t just for financial investments. The difference between enabling reader comments on your glorified blog and running a Mastodon instance is left as an exercise for the reader.

Thinkst Canary has a new canary token type: credit cards! This is extremely cool. For those who are unaware, you should go check it out right now. We’ll wait here until you get back so you don’t miss anything. Seriously, it’s that good.
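To make the mechanism concrete, here’s a rough sketch of the idea behind decoy card numbers. This is my illustration, not Thinkst’s implementation: the helper names are hypothetical, and their service handles generation, distribution, and alerting for you. You mint a number that passes the Luhn checksum but was never issued, scatter it somewhere tempting, and treat any sighting of it as a high-confidence alarm:

```python
import random


def luhn_check_digit(partial: str) -> str:
    """Compute the Luhn check digit to append to a partial card number."""
    total = 0
    # Walk the partial number right to left, doubling every second digit
    # (positions 0, 2, 4, ... counted from the right of the partial).
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)


def mint_decoy_card(prefix: str = "400000", length: int = 16) -> str:
    """Generate a Luhn-valid decoy number. A real canary service would use
    a BIN range it controls, so the number looks plausible but was never
    issued to anyone."""
    body = prefix + "".join(
        str(random.randint(0, 9)) for _ in range(length - len(prefix) - 1)
    )
    return body + luhn_check_digit(body)


def scan_for_canary(log_lines, canary: str):
    """Any appearance of the decoy number is, by construction, an attacker
    (or a very unlucky typo), so every hit is worth an alert."""
    return [line for line in log_lines if canary in line]
```

The appeal is the near-zero false-positive rate: nobody should ever legitimately use that number, so a single hit in your transaction logs tells you someone has been rummaging where they shouldn’t.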

Longer reads

Cory Doctorow wrote an essay about enshittification a week or so ago.

The underlying theme here is structural incentives. You get more of what you reward, and less of what you don’t. If you get more of one thing and less of another, you know that the incentives are set up to do that.

It doesn’t matter what the system designers intended, because the purpose of a system is what it does. If the system does something other than what was intended, it’s not going to suddenly start doing something else just by wishing harder.

Weekly tip: Concrete examples are important

This week’s column is partially inspired by Forward Networks’ presentation at #CFD16 last week, but also related to some work I’ve been doing recently.

I’m a bit of an abstract thinker. I like imagining what I could do with a thing that goes beyond what the designers had in mind. I enjoy finding ways to break stuff or make it do things it wasn’t supposed to.

But not always.

Sometimes I’m busy and just want to get something done. I don’t want to implement a framework or AbstractThingDoerFactory. I want to run the thing and use it to achieve a goal.

Making it easy for me to do that requires good documentation, and concrete examples. There’s a reason that “How to do X” articles, podcasts, and videos are extremely popular.

But finding a good, concrete example to use when building that documentation is quite tricky. It needs to be broad enough that the whole audience will care, but not so broad that individuals in the audience find the example too vague, given their current understanding. The details need to be specific enough that people without a lot of context or background can follow along, but also not so specific that it’s hard to extrapolate from the example to other potential use cases.

This sweet spot of not-too-vague and not-too-specific is also a moving target. Different people bring different degrees of familiarity with the topic, and different histories, experiences, and assumptions. These need to be taken into account so you can find a decent compromise that doesn’t annoy anyone too much.

There are some guiding principles that make it easier to succeed, though:


“Here is what good looks like” examples, sometimes called exemplars, are great.

The challenge here is showing when to deviate from the exemplar and why, but the exemplar makes a great jumping off point for what will generally turn out to be lots of different explanations. You also want to show solving a real problem, or something very close to it. Toy problems are only useful for the most trivial examples and the most novice of audiences. They get boring quickly.


Anecdotes are also great. Real people doing real things are rarely bad choices to use as illustrations of a broader point.

Like the exemplar, they provide a great place to show a few major points before then discussing how you might want to make different choices. Keeping the digressions focussed on a specific choice can help.

I’m doing it right here in this column itself.


Don’t try to pack too much into your example all at once. Instead, clearly explain what the example does cover, and what it doesn’t. Ideally, provide some guidance for where to find more detail, or warning that it’s not readily available.

This is mostly about managing expectations. When learning something new, people can only take in a certain amount.

Be mindful of assumed knowledge

Have a think about what you assume your audience already knows.

This week I’ve been wrestling with reference materials that assume a lot of knowledge that I simply don’t have, because I’m new. Other reference materials I’m using don’t suffer from this problem, and are much easier to use, even when I do have the relevant background knowledge.

As much as possible, make the assumed knowledge explicit. This helps with pacing, as discussed above.

That’s all the guidance I’ll provide here. There’s lots of material on lesson planning and other teaching/pedagogy approaches available from reliable sources like your local government education department, or ask your local librarian.
