By Skander, 19 April, 2025

Divine Connections: Building Promptheon, a GenAI Semantic Graph Generator of Ancient Gods

I- In the Beginning

Promptheon is an LLM-powered system for exploring and structuring knowledge about ancient deities from Wikipedia. The project begins by crawling the List of Deities category page, using a Gemini-based language model to classify portal links by cultural origin or divine role. Each portal is then crawled to extract linked pages, which are filtered by another LLM call to retain only deity-related entries.
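The crawl-then-filter stage described above can be sketched as a small pipeline. The function names below are hypothetical, and the classifier is a keyword stub standing in for the Gemini-based LLM call the project actually uses:

```python
def filter_deity_pages(titles, classify):
    """Keep only the pages the classifier flags as deity-related.

    `classify` stands in for an LLM call (the project uses a
    Gemini-based model); here it is any callable title -> bool.
    """
    return [t for t in titles if classify(t)]

# Stub classifier for illustration only: a keyword check, not an LLM.
def stub_classify(title):
    keywords = ("god", "goddess", "deity", "deities")
    return any(k in title.lower() for k in keywords)
```

In the real system, `classify` would batch page titles into a prompt and parse the model's response, but the surrounding pipeline shape stays the same.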

By Skander, 22 March, 2025

Machine Learning Mind Maps

This article presents a collection of mind maps that illustrate key machine learning (ML) concepts and algorithms. Where relevant, it highlights commonly used ML libraries, packages, classes, and functions—with a particular focus on components from the Scikit-learn library.

By Skander, 8 December, 2024

Thompson Sampling With Gaussian Distribution - A Stochastic Multi-armed Bandit

A Thompson Sampling algorithm using a Beta probability distribution was introduced in a previous post. This article adapts the approach to a Gaussian distribution.
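As a rough illustration of the Gaussian variant (not the article's exact code), the sketch below assumes unit-variance reward noise and a N(0, prior_var) prior on each arm's mean, so the conjugate posterior update stays in closed form:

```python
import random

def thompson_gaussian(pull, n_arms, n_rounds, prior_var=1.0):
    """Gaussian Thompson Sampling with known unit reward-noise variance."""
    counts = [0] * n_arms     # pulls per arm
    sums = [0.0] * n_arms     # cumulative reward per arm
    for _ in range(n_rounds):
        samples = []
        for a in range(n_arms):
            # Conjugate normal-normal update: precision = 1/prior_var + n
            var_a = 1.0 / (1.0 / prior_var + counts[a])
            mu_a = var_a * sums[a]
            samples.append(random.gauss(mu_a, var_a ** 0.5))
        arm = max(range(n_arms), key=lambda a: samples[a])  # pull max sample
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts, sums
```

Each round draws one sample per arm from its posterior over the mean reward and pulls the arm with the highest draw, so exploration falls out of posterior uncertainty rather than an explicit schedule.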

By Skander, 29 November, 2024

Stochastic Multi-armed Bandit - Thompson Sampling With Beta Distribution

We have previously explored two multi-armed bandit (MAB) strategies: Maximum Average Reward (MAR) and Upper Confidence Bound (UCB). Both approaches rely on the observed average reward to determine which arm to pull next, using a deterministic scoring mechanism for decision-making.
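In contrast to those deterministic scores, Thompson Sampling draws a random sample from a posterior per arm. A minimal sketch for Bernoulli rewards, assuming Beta(1, 1) priors (this is an illustration, not the article's implementation):

```python
import random

def thompson_beta(pull, n_arms, n_rounds):
    """Bernoulli Thompson Sampling with Beta(wins+1, losses+1) posteriors."""
    wins = [0] * n_arms
    losses = [0] * n_arms
    for _ in range(n_rounds):
        # Sample one draw from each arm's posterior, pull the arm with the max.
        samples = [random.betavariate(wins[a] + 1, losses[a] + 1)
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        if pull(arm):               # pull returns True on success
            wins[arm] += 1
        else:
            losses[arm] += 1
    return wins, losses
```

Because the posterior of a rarely-pulled arm stays wide, its samples occasionally come out on top, which is exactly what makes the decision rule stochastic rather than deterministic.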

By Skander, 15 November, 2024

The Exploration-Exploitation Balance: The Epsilon-Greedy Approach in Multi-Armed Bandits

This article explores the balance between exploration and exploitation, a key concept in reinforcement learning and optimization problems, using the multi-armed bandit problem as an illustration. It also explains how the epsilon-greedy strategy manages this balance effectively.
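The epsilon-greedy rule itself fits in a few lines: with probability epsilon pick a random arm, otherwise pick the arm with the best average reward so far. A minimal sketch (the `rewards_fn` callable is a stand-in for the environment, not part of the article's code):

```python
import random

def epsilon_greedy(rewards_fn, n_arms, n_rounds, epsilon=0.1):
    """Balance exploration and exploitation over n_rounds pulls."""
    counts = [0] * n_arms      # pulls per arm
    values = [0.0] * n_arms    # running average reward per arm
    for _ in range(n_rounds):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                      # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])   # exploit
        reward = rewards_fn(arm)
        counts[arm] += 1
        # Incremental mean update avoids storing the full reward history.
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values
```

Larger epsilon means more exploration; epsilon = 0 degenerates to pure greedy exploitation of the current estimates.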

By Skander, 12 November, 2024

Comparison of Three Multi-armed Bandit Strategies

I- Introduction

In a previous article, I introduced the design and implementation of a multi-armed bandit (MAB) framework. This framework was built to simplify the implementation of new MAB strategies and provide a structured approach for their analysis.

By Skander, 8 November, 2024

Design and Implementation of a Unifying Framework for Multi-armed Bandit Solvers

In previous blog posts, we explored the multi-armed bandit (MAB) problem and discussed the Upper Confidence Bound (UCB) algorithm as one approach to solving it. Research literature has introduced multiple algorithms for tackling this problem, and there is always room for experimenting with new ideas. To facilitate the implementation and comparison of different algorithms, we introduce a framework for MAB solvers.
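The framework's actual design lives in the full article; as a rough idea of what such a unifying interface can look like, here is a hypothetical sketch where every strategy implements one abstract method and shares the bookkeeping and simulation loop:

```python
from abc import ABC, abstractmethod
import random

class MABSolver(ABC):
    """Minimal interface every bandit strategy implements."""
    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    @abstractmethod
    def select_arm(self):
        """Return the index of the next arm to pull."""

    def update(self, arm, reward):
        """Record a pull, keeping a running average reward per arm."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

class RandomSolver(MABSolver):
    """Baseline strategy: pick an arm uniformly at random."""
    def select_arm(self):
        return random.randrange(self.n_arms)

def run(solver, pull, n_rounds):
    """Shared simulation loop: any MABSolver plugs in unchanged."""
    total = 0.0
    for _ in range(n_rounds):
        arm = solver.select_arm()
        reward = pull(arm)
        solver.update(arm, reward)
        total += reward
    return total
```

With this shape, comparing strategies reduces to swapping the solver class passed to the same `run` loop, which is the kind of structured analysis the article aims for.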

By Skander, 3 November, 2024

Analyzing the Upper Confidence Bound Algorithm

This article evaluates the implementation of the Upper Confidence Bound (UCB) algorithm discussed in an earlier post, using a single dataset provided by Super Data Science.
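For reference, the core of the classic UCB1 rule can be sketched as follows (an illustrative version, not necessarily the implementation under evaluation): after playing each arm once, always pull the arm maximizing the average reward plus a confidence bonus of sqrt(2 ln t / n_a):

```python
import math

def ucb1(pull, n_arms, n_rounds):
    """UCB1: pull the arm maximizing avg reward + sqrt(2 ln t / n_a)."""
    counts = [0] * n_arms
    values = [0.0] * n_arms
    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1        # initialization: play each arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: values[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values
```

The bonus term shrinks as an arm accumulates pulls, so under-explored arms keep getting revisited at a logarithmic rate while the apparent best arm dominates.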

[Image: Number of impressions for each ad over time.]

My Apps

  • Collatz (Syracuse) Sequence Calculator / Visualizer
  • Erdős–Rényi Random Graph Generator / Analyzer
  • KMeans Animator
  • Language Family Explorer


Skander Kort