Week 49-50, 2024 - Wind Down
It's a slow end of the year indeed, hence the lack of a newsletter last week. This is probably the final issue for 2024, and it only contains a few updates and articles I didn’t want to sit on any longer. To start with, I’ve published a bunch of new content that you may have missed if you only follow this newsletter.
I guest-posted on the Besides Code newsletter about the Developer to Engineering Manager path. The article describes one of the approaches I’ve seen work in the IC2EM transition - there are many. Starting out as a new manager is a topic I want to focus on much more deeply in 2025.
Jeremy and I are going full speed ahead and have published two new episodes of our podcast The Retrospective:
- First, about recognizing and dealing with Technical Debt. We wanted to move away from the prevailing assumption that Technical Debt is something bad that must be avoided at all costs, and provide a more pragmatic framework for embracing it while keeping it at bay.
- Second, just out this week: a super-practical episode where we discussed interruptions and their impact on teams, and introduced the Firefighter role, a concrete tool for dealing with them while staying responsive to stakeholders.
I’m excited about this format of shorter, more topical episodes in the second season of our podcast. The best way to stay up-to-date with this content is to sign up for the podcast’s dedicated blog, where we share show notes and related information alongside the new episodes.
📋 What I learned recently
The expression “prompt engineering” is slowly growing on me. Being able to interact efficiently with the chat-based interface of large language models is a strong differentiator. It decides whether the outcome will be something truly useful, or just another mindless cliché illustration or a wall of text fluffed up with buzzwordy management-speak in the ocean of crap content ChatGPT and its peers unleashed on us. Some of my recent learnings in this area:
- Nowadays I spend more time creating the context for a chat than on the chat itself. I make sure to share relevant documents, example texts, and other related materials that can add to the process. Once done, my kickoff message is usually very detailed, explaining the role I want the LLM to play, who I am in this scenario, and the outcome I’m after: my goal for the discussion. Finally, I detail the behavior I expect from it; for example, I usually prompt Claude to be constructive and critical, to counter its default tendency of agreeing with whatever the user says.
- Since these prompts get verbose, I started experimenting with XML tags to structure my messages and help the LLM process them efficiently (see the sketch after this list). Note that there’s no standard set of tags; just use whatever best describes your content.
- ChatGPT, Claude AI, and as far as I know all the other chat-based LLMs work by sending over the full chat history at every step. The LLM processes the text in its entirety and responds based on this whole context. In longer discussions, this can degrade the experience in various ways: missing key points, wandering focus, slowness, and most importantly, depleting usage limits. To counter this, I started to play with “handover documents”: when I feel I’ve reached an important milestone, my last prompt in a chat tasks the LLM with creating an artifact that captures where we are, the context, everything a person would need to get up to speed if they joined the project. Once done, I can close the chat and use this document as the starting point of a new conversation with the same LLM, continuing the work. (In Claude, you can add an artifact to the current project with a single click of a button.)
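To make the XML-tag idea concrete, here is a minimal sketch in Python of how such a kickoff message could be assembled. The tag names (`<role>`, `<context>`, `<goal>`, `<behavior>`) and the example content are my own illustrative picks, not anything the models require or that I use verbatim; the resulting string can be pasted into any chat UI or sent through a model’s API.

```python
# Minimal sketch: assembling an XML-tagged kickoff prompt.
# The tag names below are arbitrary choices, not a standard the LLMs require.

role = "You are an experienced engineering manager coaching a first-time manager."
context = (
    "I lead a team of six engineers. Attached are our latest retro notes "
    "and the draft of my quarterly goals."
)
goal = "Help me turn the retro feedback into three concrete improvement actions."
behavior = (
    "Be constructive but critical; challenge my assumptions "
    "instead of agreeing by default."
)

kickoff_prompt = f"""<role>
{role}
</role>

<context>
{context}
</context>

<goal>
{goal}
</goal>

<behavior>
{behavior}
</behavior>"""

# Paste the result into the chat, or send it via the model's API.
print(kickoff_prompt)
```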
🤔 Articles that made me think
Learnings from an Air Traffic Control Incident
Chris Evans from incident.io did what most of us won’t do, and dug through the 84 pages of the incident report following a major disturbance in the UK airspace last summer. The key takeaways at the end are gold, especially the “no root cause” one. The natural pressure to label things properly pushes us towards searching for a root cause we can point at, but if you’ve already written some incident reports, you know that the “Five Whys” tool sounds simpler in theory than it is in practice. Embracing that sometimes there’s no single root cause, just a weird coincidence of niche events, can help us relieve that pressure and discover all the factors that contributed, resulting in a better understanding of our systems - both technical and social.
Introducing DX Core 4
An exciting new tool for productivity measurement. It seems to strike a good balance between DORA, which might be too far from the human aspect for some, and SPACE, which is a bit too theoretical. Core 4 offers a pragmatic mix of these, grouping primary and secondary metrics under the four key areas of Speed, Effectiveness, Quality, and Impact. I appreciate the emphasis on measuring for support, not control, and the explicit warning against using metrics like PRs per engineer on an individual level.
That’s it for 2024 - may you reach your most ambitious goal next year,
Péter