The term “filter bubble,” coined by Eli Pariser, captures how social platforms can trap us in narrow content silos. YouTube is no exception: its algorithm often serves up videos from the same handful of channels we already watch—boosting our dwell-time and, ultimately, the platform’s ad revenue.
In this article, we’ll walk through the implementation of a one-dimensional, binary-state cellular automaton built on top of the parametric CA framework I introduced in a previous post. The single-page app is split into two pieces: the cellular-automaton engine itself, and a graphical user interface (GUI). The GUI includes two PixiJS renderers—one for the rule and one for the grid.
Cellular automata are mathematical models built on a finite population of cells. Each cell holds a state selected from a predefined set—finite for discrete automata and potentially continuous for analog variants. In the classic binary case, every cell is either ALIVE or DEAD. At each time step the system advances in lockstep: the new state of every cell is computed by applying a simple local rule that considers the current state of the cell and those of its neighbors. Repeated iterations of this rule generate successive “generations,” allowing intricate global patterns to emerge from straightforward local interactions.
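To make the update rule concrete, here is a minimal Python sketch of one generation of a one-dimensional binary automaton. It is an illustration only, not the PixiJS implementation described above, and it assumes a Wolfram-style rule number and wrap-around boundaries; the `step` function and its arguments are names chosen here for clarity.

```python
import numpy as np

def step(cells: np.ndarray, rule: int = 30) -> np.ndarray:
    """Advance a 1D binary cellular automaton by one generation.

    `cells` holds 0s (DEAD) and 1s (ALIVE); `rule` is a Wolfram rule
    number encoding the outcome for each of the 8 possible neighborhoods.
    """
    # Left and right neighbors, with wrap-around boundaries.
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    # Each 3-cell neighborhood forms an index 0..7 into the rule table.
    idx = (left << 2) | (cells << 1) | right
    # Bit k of the rule number gives the next state for neighborhood k.
    return (rule >> idx) & 1

# Example: evolve a single ALIVE cell for a few generations.
grid = np.zeros(11, dtype=int)
grid[5] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in grid))
    grid = step(grid)
```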
Promptheon is an LLM-powered system for exploring and structuring knowledge about ancient deities from Wikipedia. The project begins by crawling the List of Deities category page, using a Gemini-based language model to classify portal links by cultural origin or divine role.
This article presents a collection of mind maps that illustrate key machine learning (ML) concepts and algorithms. Where relevant, it highlights commonly used ML libraries, packages, classes, and functions—with a particular focus on components from the Scikit-learn library.
A Thompson Sampling algorithm using a Beta probability distribution was introduced in a previous post. The Beta distribution is well-suited for binary multi-armed bandits (MABs), where arm rewards are restricted to values of 0 or 1.
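For reference, a minimal Python sketch of the Beta-based idea, assuming a Beta(1, 1) prior and NumPy; the function name and toy setup are illustrative rather than the code from the earlier post.

```python
import numpy as np

rng = np.random.default_rng(42)

def beta_thompson_pull(successes: np.ndarray, failures: np.ndarray) -> int:
    """Pick an arm by sampling from each arm's Beta posterior.

    With a Beta(1, 1) prior, `successes` and `failures` are the counts
    of 1-rewards and 0-rewards observed so far for each arm.
    """
    samples = rng.beta(successes + 1, failures + 1)
    return int(np.argmax(samples))

# Toy run against three Bernoulli arms with hidden success probabilities.
true_p = np.array([0.2, 0.5, 0.7])
wins = np.zeros(3)
losses = np.zeros(3)
for _ in range(1000):
    arm = beta_thompson_pull(wins, losses)
    reward = rng.random() < true_p[arm]
    wins[arm] += reward
    losses[arm] += 1 - reward
print((wins + 1) / (wins + losses + 2))  # posterior mean reward per arm
```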
In this article, we introduce an alternative MAB sampling algorithm designed for the more general case where arm rewards are continuous: Thompson Sampling with a Gaussian Distribution (TSG).
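A minimal sketch of the Gaussian variant, assuming unit observation variance and a running-mean estimate per arm; the priors and variance handling in the article's TSG algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def gaussian_thompson_pull(means: np.ndarray, counts: np.ndarray) -> int:
    """Pick an arm by sampling from a Gaussian posterior over each arm's mean.

    Assumes unit observation variance, so the posterior standard deviation
    of an arm's mean shrinks roughly as 1 / sqrt(pulls).
    """
    std = 1.0 / np.sqrt(counts + 1)  # +1 keeps unpulled arms explorable
    samples = rng.normal(means, std)
    return int(np.argmax(samples))

# Toy run: three arms with continuous (Gaussian) rewards.
true_means = np.array([0.3, 0.6, 0.9])
est_means = np.zeros(3)
pulls = np.zeros(3)
for _ in range(1000):
    arm = gaussian_thompson_pull(est_means, pulls)
    reward = rng.normal(true_means[arm], 1.0)
    pulls[arm] += 1
    est_means[arm] += (reward - est_means[arm]) / pulls[arm]  # running mean
print(est_means, pulls)
```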
We have previously explored two multi-armed bandit (MAB) strategies: Maximum Average Reward (MAR) and Upper Confidence Bound (UCB). Both rely on the observed average reward to decide which arm to pull next, scoring the arms deterministically.
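As a point of comparison with the sampling-based methods above, a deterministic score of the UCB1 flavor might look like the sketch below; the exact exploration bonus used in the earlier posts may differ.

```python
import numpy as np

def ucb1_pull(means: np.ndarray, counts: np.ndarray, t: int) -> int:
    """Deterministic UCB1-style scoring: average reward plus an exploration bonus.

    Arms pulled less often receive a larger bonus, so the score favors
    under-explored arms until their averages are well estimated.
    `t` is the total number of pulls so far (assumed >= 1).
    """
    # Play every arm once before applying the formula.
    if np.any(counts == 0):
        return int(np.argmin(counts))
    scores = means + np.sqrt(2.0 * np.log(t) / counts)
    return int(np.argmax(scores))
```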
In this article, I will explore the balance between exploration and exploitation, a key concept in reinforcement learning and optimization. To illustrate it, I will use the multi-armed bandit problem and explain how the epsilon-greedy strategy manages this trade-off.
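A minimal epsilon-greedy sketch, assuming a per-arm estimate of the average reward is maintained elsewhere; the epsilon value and function name are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_pull(means: np.ndarray, epsilon: float = 0.1) -> int:
    """With probability epsilon explore a random arm; otherwise exploit
    the arm with the highest estimated average reward."""
    if rng.random() < epsilon:
        return int(rng.integers(len(means)))  # explore
    return int(np.argmax(means))              # exploit
```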