This post reviews Beyond Vibe Coding by Addy Osmani, cutting through the hype around vibe coding to draw a clear line between prompt-only experimentation and professional AI-assisted software engineering. It highlights why the book’s perspective—backed by its O'Reilly Media pedigree—matters for real-world teams, with practical frameworks, classifications, and hard-earned warnings about quality, security, and skills atrophy. The takeaway is refreshingly balanced: AI is already reshaping how we build software, but engineers remain firmly responsible for turning AI-generated code into maintainable, production-ready systems.
Google’s generous free-tier limits once made Gemini an ideal platform for experimenting with LLM-powered applications. When those limits quietly changed, a YouTube recommendation system that had worked reliably for months suddenly stopped functioning. This post recounts that experience and explores what it reveals about the hidden risks of building on third-party AI platforms.
After four years on DigitalOcean, I explored cheaper VPS alternatives and evaluated OVH as a potential replacement. Through load testing and hardware analysis, OVH proved faster, more scalable, and better equipped to handle traffic thanks to more CPU cores, additional RAM, and efficient PHP process management. This comparison highlights the importance of assessing both hardware and software configuration when choosing a hosting provider.
I am not a web designer by training, yet web design has been a recurring part of my work for nearly two decades. Building single-page applications means dealing with HTML, CSS, and layout decisions whether one enjoys it or not. For a long time, I approached web design pragmatically, learning just enough to make things work, while never feeling fully in control of the result. Recently, that changed.
I’ve spent the past few days reading The Hundred-Page Machine Learning Book by Andriy Burkov, and I’m genuinely glad I did.
This is one of those rare books that manages to strike what feels like a perfect balance: it covers a remarkably wide range of machine learning algorithms and techniques in a very short space, explains them in a clear and engaging way, and yet never sacrifices rigor. The mathematical foundations are always there (equations included), but introduced only when they are truly needed.
I’ve recently been looking for a dataset to evaluate some TSP algorithms. Naturally, the TSPLIB benchmark came to mind. However, one quickly realizes that the TSP instances in this library are stored in a custom plain-text format. There is an XML port of these files, but for data interchange XML has largely given way to JSON.
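To make the conversion concrete, here is a minimal sketch of turning a simple EUC_2D TSPLIB instance into JSON. The function name `tsplib_to_json` and the simplified parsing (keyword lines plus a `NODE_COORD_SECTION`) are my own illustrative assumptions, not code from the post; real TSPLIB files have additional sections and edge cases this sketch ignores.

```python
import json

def tsplib_to_json(text: str) -> str:
    """Convert a simple EUC_2D TSPLIB instance to JSON (illustrative sketch)."""
    meta, nodes = {}, []
    in_coords = False
    for line in text.splitlines():
        line = line.strip()
        if not line or line == "EOF":
            continue
        if line == "NODE_COORD_SECTION":
            in_coords = True  # subsequent lines are "id x y" triples
            continue
        if in_coords:
            node_id, x, y = line.split()
            nodes.append({"id": int(node_id), "x": float(x), "y": float(y)})
        elif ":" in line:
            key, value = (part.strip() for part in line.split(":", 1))
            meta[key] = value  # keyword lines like "NAME : berlin52"
    return json.dumps({"meta": meta, "nodes": nodes}, indent=2)
```

A two-city toy instance would round-trip through this as `{"meta": {...}, "nodes": [{"id": 1, ...}, ...]}`, which is directly consumable by most languages without an XML parser.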
The term “filter bubble,” coined by Eli Pariser, captures how social platforms can trap us in narrow content silos. YouTube is no exception: its algorithm often serves up videos from the same handful of channels we already watch—boosting our dwell-time and, ultimately, the platform’s ad revenue.
In this article, we’ll walk through the implementation of a one-dimensional, binary-state cellular automaton built on top of the parametric CA framework I introduced in a previous post. The single-page app is split into two pieces: the cellular-automaton engine itself, and a graphical user interface (GUI). The GUI includes two PixiJS renderers—one for the rule and one for the grid.
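As a taste of what such an engine computes, here is a minimal, hypothetical sketch of one update step of a one-dimensional binary CA driven by a Wolfram rule number (in Python for brevity; the post's actual app is built with PixiJS in the browser, and the name `ca_step` is mine, not the framework's).

```python
def ca_step(cells: list[int], rule: int) -> list[int]:
    """Advance a 1-D binary cellular automaton one generation.

    `rule` is a Wolfram rule number (0-255): bit k of `rule` gives the
    next state for a neighborhood whose bits (left, center, right) encode k.
    Boundaries are periodic (the row wraps around).
    """
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[(i - 1) % n]
        center = cells[i]
        right = cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        nxt.append((rule >> neighborhood) & 1)
    return nxt
```

For example, under Rule 90 (where each cell becomes the XOR of its two neighbors) a single live cell spreads into the familiar Sierpiński-triangle pattern generation by generation.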