{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to Build an Automated Amazon Price Tracking Tool in Python For Free\n",
"## That sends alerts to your phone and keeps price history"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What Shall We Build in This Tutorial?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There is a lot to be said about the psychology of discounts. For example, buying a discounted item even though we don't need it isn't saving money at all. That's walking into the oldest trap sellers use to increase sales. However, there are legitimate cases where waiting for a price drop on items you actually need makes perfect sense.\n",
"\n",
"The challenge is that e-commerce websites run flash sales and temporary discounts constantly, but these deals often disappear as quickly as they appear. Missing these brief windows of opportunity can be frustrating.\n",
"\n",
"That's where automation comes in. In this guide, we'll build a Python application that monitors product prices across any e-commerce website and instantly notifies you when prices drop on items you're actually interested in. Here is a sneak peak of the app:\n",
"\n",
"![](images/sneak-peek.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The app looks pretty dull, doesn't it? Well, no worries because it is fully functional:\n",
"- It has a minimalistic UI to add or remove products from the tracker\n",
"- A simple dashboard to display price history for each product\n",
"- Controls for setting the price drop threshold in percentages\n",
"- A notification system that sends Discord alerts when a tracked item's price drops\n",
"- A scheduling system that updates the product prices on an interval you specify\n",
"- Runs for free for as long as you want\n",
"\n",
"Even though the title says \"Amazon price tracker\" (full disclosure: I was forced to write that for SEO purposes), the app will work for any e-commerce website you can imagine (except Ebay, for some reason). \n",
"\n",
"So, let's get started building this Amazon price tracker. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The Toolstack We Will Use"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The app's code will be written fully in Python and its libraries:\n",
"\n",
"- [Streamlit](streamlit.io) for the UI\n",
"- [Firecrawl](firecrawl.dev) for AI-based scraping of e-commerce websites\n",
"- [SQLAlchemy](https://www.sqlalchemy.org/) for database management\n",
"\n",
"Apart from Python, we will use these platforms:\n",
"\n",
"- Discord for notifications\n",
"- GitHub for hosting the app\n",
"- GitHub Actions for running the app on a schedule\n",
"- Supabase for hosting a free Postgres database instance"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building an Amazon Price Tracker App Step-by-step"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since this project involves multiple components working together, we'll take a top-down approach rather than building individual pieces first. This approach makes it easier to understand how everything fits together, since we'll introduce each tool only when it's needed. The benefits of this strategy will become clear as we progress through the tutorial."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 1: Setting up the environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's create a dedicated environment on our machines to work on the project:\n",
"\n",
"```bash\n",
"mkdir automated-price-tracker\n",
"cd automated-price-tracker\n",
"python -m venv .venv\n",
"source .venv/bin/activate\n",
"```\n",
"\n",
"These commands create a working directory and activate a virtual environment. Next, create a new script called `ui.py` for designing the user interface with Streamlit.\n",
"\n",
"```bash\n",
"touch ui.py\n",
"```\n",
"\n",
"Then, install Streamlit:\n",
"\n",
"```bash\n",
"pip install streamlit\n",
"```\n",
"\n",
"Next, create a `requirements.txt` file and add Streamlit as the first dependency:\n",
"\n",
"```bash\n",
"touch requirements.txt\n",
"echo \"streamlit\" >> requirements.txt\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the code will be hosted on GitHub, we need to initialize Git and create a `.gitignore` file:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```bash\n",
"git init\n",
"touch .gitignore\n",
"echo \".venv\" >> .gitignore # Add the virtual env folder\n",
"git commit -m \"Initial commit\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 2: Add a sidebar to the UI for product input"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the final product one more time:\n",
"\n",
"![](images/sneak-peek.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It has two sections: the sidebar and the main dashboard. Since the first thing you do when launching this app is adding products, we will start building the sidebar first. Open `ui.py` and paste the following code:\n",
"\n",
"```python\n",
"import streamlit as st\n",
"\n",
"# Set up sidebar\n",
"with st.sidebar:\n",
" st.title(\"Add New Product\")\n",
" product_url = st.text_input(\"Product URL\")\n",
" add_button = st.button(\"Add Product\")\n",
"\n",
"# Main content\n",
"st.title(\"Price Tracker Dashboard\")\n",
"st.markdown(\"## Tracked Products\")\n",
"```\n",
"\n",
"The code snippet above sets up a basic Streamlit web application with two main sections. In the sidebar, it creates a form for adding new products with a text input field for the product URL and an \"Add Product\" button. The main content area contains a dashboard title and a section header for tracked products. The code uses Streamlit's `st.sidebar` context manager to create the sidebar layout and basic Streamlit components like `st.title`, `st.text_input`, and `st.button` to build the user interface elements.\n",
"\n",
"To see how this app looks like, run the following command:\n",
"\n",
"```bash\n",
"streamlit run ui.py\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's add a commit to save our progress:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Add a sidebar to the basic UI\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 3: Add a feature to check if input URL is valid\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the next step, we want to add some restrictions to the input field like checking if the passed URL is valid. For this, create a new file called `utils.py` where we write additional utility functions for our app:\n",
"\n",
"```bash\n",
"touch utils.py\n",
"```\n",
"\n",
"Inside the script, paste following code:\n",
"\n",
"```bash\n",
"# utils.py\n",
"from urllib.parse import urlparse\n",
"import re\n",
"\n",
"\n",
"def is_valid_url(url: str) -> bool:\n",
" try:\n",
" # Parse the URL\n",
" result = urlparse(url)\n",
"\n",
" # Check if scheme and netloc are present\n",
" if not all([result.scheme, result.netloc]):\n",
" return False\n",
"\n",
" # Check if scheme is http or https\n",
" if result.scheme not in [\"http\", \"https\"]:\n",
" return False\n",
"\n",
" # Basic regex pattern for domain validation\n",
" domain_pattern = (\n",
" r\"^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(\\.[a-zA-Z]{2,})+$\"\n",
" )\n",
" if not re.match(domain_pattern, result.netloc):\n",
" return False\n",
"\n",
" return True\n",
"\n",
" except Exception:\n",
" return False\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above function `is_valid_url()` validates URLs by checking several criteria:\n",
"\n",
"1. It verifies the URL has both a scheme (`http`/`https`) and domain name\n",
"2. It ensures the scheme is specifically `http` or `https`\n",
"3. It validates the domain name format using regex to check for valid characters and TLD\n",
"4. It returns True only if all checks pass, False otherwise\n",
"\n",
"Let's use this function in our `ui.py` file. Here is the modified code:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"import streamlit as st\n",
"from utils import is_valid_url\n",
"\n",
"\n",
"# Set up sidebar\n",
"with st.sidebar:\n",
" st.title(\"Add New Product\")\n",
" product_url = st.text_input(\"Product URL\")\n",
" add_button = st.button(\"Add Product\")\n",
"\n",
" if add_button:\n",
" if not product_url:\n",
" st.error(\"Please enter a product URL\")\n",
" elif not is_valid_url(product_url):\n",
" st.error(\"Please enter a valid URL\")\n",
" else:\n",
" st.success(\"Product is now being tracked!\")\n",
"\n",
"# Main content\n",
"...\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is what's new:\n",
"\n",
"1. We added URL validation using the `is_valid_url()` function from `utils.py`\n",
"2. When the button is clicked, we perform validation:\n",
" - Check if URL is empty\n",
" - Validate URL format using `is_valid_url()`\n",
"3. User feedback is provided through error/success messages:\n",
" - Error shown for empty URL\n",
" - Error shown for invalid URL format \n",
" - Success message when URL passes validation\n",
"\n",
"Rerun the Streamlit app again and see if our validation works. Then, return to your terminal to commit the changes we've made:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Add a feature to check URL validity\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 4: Scrape the input URL for product details"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When a valid URL is entered and the add button is clicked, we need to implement product scraping functionality instead of just showing a success message. The system should:\n",
"\n",
"1. Immediately scrape the product URL to extract key details:\n",
" - Product name\n",
" - Current price\n",
" - Main product image\n",
" - Brand name\n",
" - Other relevant attributes\n",
"\n",
"2. Store these details in a database to enable:\n",
" - Regular price monitoring\n",
" - Historical price tracking\n",
" - Price change alerts\n",
" - Product status updates\n",
"\n",
"For the scraper, we will use [Firecrawl](firecrawl.dev), an AI-based scraping API for extracting webpage data without HTML parsing. This solution provides several advantages:\n",
"\n",
"1. No website HTML code analysis required for element selection\n",
"2. Resilient to HTML structure changes through AI-based element detection\n",
"3. Universal compatibility with product webpages due to structure-agnostic approach \n",
"4. Reliable website blocker bypass via robust API infrastructure\n",
"\n",
"First, create a new file called `scraper.py`:\n",
"\n",
"```bash\n",
"touch scraper.py\n",
"```\n",
"\n",
"Then, install these three libraries:\n",
"\n",
"```bash\n",
"pip install firecrawl-py pydantic python-dotenv\n",
"echo \"firecrawl-py\\npydantic\\npython-dotenv\\n\" >> requirements.txt # Add them to dependencies\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`firecrawl-py` is the Python SDK for Firecrawl scraping engine, `pydantic` is a data validation library that helps enforce data types and structure through Python class definitions, and `python-dotenv` is a library that loads environment variables from a `.env` file into your Python application.\n",
"\n",
"With that said, head over to the Firecrawl website and [sign up for a free account](https://www.firecrawl.dev/) (the free plan will work fine). You will be given an API key, which you should copy. \n",
"\n",
"Then, create a `.env` file in your terminal and add the API key as an environment variable:\n",
"\n",
"```bash\n",
"touch .env\n",
"echo \"FIRECRAWL_API_KEY='YOUR-API-KEY-HERE' >> .env\"\n",
"echo \".env\" >> .gitignore # Ignore .env files in Git\n",
"```\n",
"\n",
"The `.env` file is used to securely store sensitive configuration values like API keys that shouldn't be committed to version control. By storing the Firecrawl API key in `.env` and adding it to `.gitignore`, we ensure it stays private while still being accessible to our application code. This is a security best practice to avoid exposing credentials in source control.\n",
"\n",
"Now, we can start writing the `scraper.py`:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"from firecrawl import FirecrawlApp\n",
"from pydantic import BaseModel, Field\n",
"from dotenv import load_dotenv\n",
"from datetime import datetime\n",
"\n",
"load_dotenv()\n",
"\n",
"app = FirecrawlApp()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, `load_dotenv()` function reads the `.env` file you have in your working directory and loads the environment variables inside, including the Firecrawl API key. When you create an instance of `FirecrawlApp` class, the API key is automatically detected to establish a connection between your script and the scraping engine in the form of the `app` variable.\n",
"\n",
"Now, we create a Pydantic class (usually called a model) that defines the details we want to scrape from each product:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"class Product(BaseModel):\n",
" \"\"\"Schema for creating a new product\"\"\"\n",
"\n",
" url: str = Field(description=\"The URL of the product\")\n",
" name: str = Field(description=\"The product name/title\")\n",
" price: float = Field(description=\"The current price of the product\")\n",
" currency: str = Field(description=\"Currency code (USD, EUR, etc)\")\n",
" main_image_url: str = Field(description=\"The URL of the main image of the product\")\n",
"```\n",
"\n",
"Pydantic models may be completely new to you, so let's break down the `Product` model:\n",
"\n",
"- The `url` field stores the product page URL we want to track\n",
"- The `name` field stores the product title/name that will be scraped\n",
"- The `price` field stores the current price as a float number\n",
"- The `currency` field stores the 3-letter currency code (e.g. USD, EUR)\n",
"- The `main_image_url` field stores the URL of the product's main image\n",
"\n",
"Each field is typed and has a description that documents its purpose. The `Field` class from Pydantic allows us to add metadata like descriptions to each field. These descriptions are especially important for Firecrawl since it uses them to automatically locate the relevant HTML elements containing the data we want. \n",
"\n",
"Now, let's create a function to call the engine to scrape URL's based on the schema above:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"def scrape_product(url: str):\n",
" extracted_data = app.scrape_url(\n",
" url,\n",
" params={\n",
" \"formats\": [\"extract\"],\n",
" \"extract\": {\"schema\": Product.model_json_schema()},\n",
" },\n",
" )\n",
"\n",
" # Add the scraping date to the extracted data\n",
" extracted_data[\"extract\"][\"timestamp\"] = datetime.utcnow()\n",
"\n",
" return extracted_data[\"extract\"]\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" product = \"https://www.amazon.com/gp/product/B002U21ZZK/\"\n",
"\n",
" print(scrape_product(product))\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code above defines a function called `scrape_product` that takes a URL as input and uses it to scrape product information. Here's how it works:\n",
"\n",
"The function calls `app.scrape_url` with two parameters:\n",
"1. The product URL to scrape\n",
"2. A params dictionary that configures the scraping:\n",
" - It specifies we want to use the \"extract\" format\n",
" - It provides our `Product` Pydantic model schema as the extraction template as a JSON object\n",
"\n",
"The scraper will attempt to find and extract data that matches our Product schema fields - the URL, name, price, currency, and image URL.\n",
"\n",
"The function returns just the \"extract\" portion of the scraped data, which contains the structured product information. `extract` returns a dictionary to which we add the date of the scraping as it will be important later on.\n",
"\n",
"Let's test the script by running it:\n",
"\n",
"```bash\n",
"python scraper.py\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should get an output like this:\n",
"\n",
"```python\n",
"{\n",
" 'url': 'https://www.amazon.com/dp/B002U21ZZK', \n",
" 'name': 'MOVA Globe Earth with Clouds 4.5\"', \n",
" 'price': 212, \n",
" 'currency': 'USD', \n",
" 'main_image_url': 'https://m.media-amazon.com/images/I/41bQ3Y58y3L._AC_.jpg', \n",
" 'timestamp': '2024-12-05 13-20'\n",
"}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output shows that a [MOVA Globe](https://www.amazon.com/dp/B002U21ZZK) costs $212 USD on Amazon at the time of writing this article. You can test the script for any other website that contains the information we are looking (except Ebay):\n",
"\n",
"- Price\n",
"- Product name/title\n",
"- Main image URL\n",
"\n",
"One key advantage of using Firecrawl is that it returns data in a consistent dictionary format across all websites. Unlike HTML-based scrapers like BeautifulSoup or Scrapy which require custom code for each site and can break when website layouts change, Firecrawl uses AI to understand and extract the requested data fields regardless of the underlying HTML structure. \n",
"\n",
"Finish this step by committing the new changes to Git:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Implement a Firecrawl scraper for products\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 5: Storing new products in a PostgreSQL database"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to check product prices regularly, we need to have an online database. In this case, Postgres is the best option since it's reliable, scalable, and has great support for storing time-series data like price histories.\n",
"\n",
"There are many platforms for hosting Postgres instances but the one I find the easiest and fastest to set up is Supabase. So, please head over to [the Supabase website](https://supabase.com) and create your free account. During the sign-up process, you will be given a password, which you should save somewhere safe on your machine. \n",
"\n",
"\n",
"Then, in a few minutes, your free Postgres instance comes online. To connect to this instance, click on Home in the left sidebar and then, \"Connect\":\n",
"\n",
"![](images/supabase_connect.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will be shown your database connection string with a placeholder for the password you copied. You should paste this string in your `.env` file with your password added to the `.env` file:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```bash\n",
"echo POSTGRES_URL=\"THE-SUPABASE-URL-STRING-WITH-YOUR-PASSWORD-ADDED\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, the easiest way to interact with this database is through SQLAlchemy. Let's install it:\n",
"\n",
"```bash\n",
"pip install \"sqlalchemy==2.0.35\" psycopg2-binary\n",
"echo \"psycopg2-binary\\nsqlalchemy==2.0.35\" >> requirements.txt\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Note: [SQLAlchemy](https://sqlalchemy.org) is a Python SQL toolkit and Object-Relational Mapping (ORM) library that lets us interact with databases using Python code instead of raw SQL. For our price tracking project, it provides essential features like database connection management, schema definition through Python classes, and efficient querying capabilities. This makes it much easier to store and retrieve product information and price histories in our Postgres database.\n",
"\n",
"After the installation, create a new `database.py` file for storing database-related functions:\n",
"\n",
"```bash\n",
"touch database.py\n",
"```\n",
"\n",
"Let's populate this script:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"from sqlalchemy import create_engine, Column, String, Float, DateTime, ForeignKey\n",
"from sqlalchemy.orm import sessionmaker, relationship, declarative_base\n",
"from datetime import datetime\n",
"\n",
"Base = declarative_base()\n",
"\n",
"\n",
"class Product(Base):\n",
" __tablename__ = \"products\"\n",
"\n",
" url = Column(String, primary_key=True)\n",
" prices = relationship(\n",
" \"PriceHistory\", back_populates=\"product\", cascade=\"all, delete-orphan\"\n",
" )\n",
"\n",
"\n",
"class PriceHistory(Base):\n",
" __tablename__ = \"price_histories\"\n",
"\n",
" id = Column(String, primary_key=True)\n",
" product_url = Column(String, ForeignKey(\"products.url\"))\n",
" name = Column(String, nullable=False)\n",
" price = Column(Float, nullable=False)\n",
" currency = Column(String, nullable=False)\n",
" main_image_url = Column(String)\n",
" timestamp = Column(DateTime, nullable=False)\n",
" product = relationship(\"Product\", back_populates=\"prices\")\n",
"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"The code above defines two SQLAlchemy models for our price tracking database:\n",
"\n",
"The Product model represents items we want to track, with the product URL as the primary key. It has a one-to-many relationship with price histories (which means each product in `products` can have multiple price history entry in `price_histories`).\n",
"\n",
"The `PriceHistory` model stores individual price points over time. Each record contains:\n",
"- A unique ID as primary key\n",
"- The product URL as a foreign key linking to the `Product`\n",
"- The product name\n",
"- The price value and currency\n",
"- The main product image URL\n",
"- A timestamp of when the price was recorded\n",
"\n",
"The relationship between `Product` and `PriceHistory` is bidirectional, allowing easy navigation between related records. The `cascade` setting ensures price histories are deleted when their product is deleted.\n",
"\n",
"These models provide the structure for storing and querying our price tracking data in a PostgreSQL database using SQLAlchemy's ORM capabilities."
]
},
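{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to sanity-check the models before touching Supabase, a minimal sketch like the one below (using a throwaway in-memory SQLite database instead of our Postgres instance) confirms that the tables and the relationship wire up correctly:\n",
"\n",
"```python\n",
"# smoke_test_models.py: a hypothetical throwaway script, not part of the app\n",
"from datetime import datetime\n",
"\n",
"from sqlalchemy import create_engine\n",
"from sqlalchemy.orm import sessionmaker\n",
"\n",
"from database import Base, PriceHistory, Product\n",
"\n",
"engine = create_engine(\"sqlite:///:memory:\")  # vanishes when the script exits\n",
"Base.metadata.create_all(engine)\n",
"Session = sessionmaker(bind=engine)\n",
"\n",
"session = Session()\n",
"product = Product(url=\"https://example.com/product\")\n",
"# Appending through the relationship fills in product_url automatically\n",
"product.prices.append(\n",
"    PriceHistory(\n",
"        id=\"https://example.com/product_2024-12-05\",\n",
"        name=\"Test Product\",\n",
"        price=99.99,\n",
"        currency=\"USD\",\n",
"        timestamp=datetime.utcnow(),\n",
"    )\n",
")\n",
"session.add(product)\n",
"session.commit()\n",
"print(session.query(PriceHistory).count())  # 1\n",
"```"
]
},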
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we define a `Database` class with a singe `add_product` method:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"class Database:\n",
" def __init__(self, connection_string):\n",
" self.engine = create_engine(connection_string)\n",
" Base.metadata.create_all(self.engine)\n",
" self.Session = sessionmaker(bind=self.engine)\n",
"\n",
" def add_product(self, url):\n",
" session = self.Session()\n",
" try:\n",
" # Create the product entry\n",
" product = Product(url=url)\n",
" session.merge(product) # merge will update if exists, insert if not\n",
" session.commit()\n",
" finally:\n",
" session.close()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"The `Database` class above provides core functionality for managing product data in our PostgreSQL database. It takes a connection string in its constructor to establish the database connection using SQLAlchemy.\n",
"\n",
"The `add_product` method allows us to store new product URLs in the database. It uses SQLAlchemy's `merge` functionality which intelligently handles both inserting new products and updating existing ones, preventing duplicate entries.\n",
"\n",
"The method carefully manages database sessions, ensuring proper resource cleanup by using `try`/`finally` blocks. This prevents resource leaks and maintains database connection stability."
]
},
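{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick way to try the class out (assuming your `.env` already contains `POSTGRES_URL`):\n",
"\n",
"```python\n",
"import os\n",
"\n",
"from dotenv import load_dotenv\n",
"\n",
"from database import Database\n",
"\n",
"load_dotenv()\n",
"db = Database(os.getenv(\"POSTGRES_URL\"))\n",
"\n",
"# Calling add_product twice with the same URL is safe: merge updates\n",
"# the existing row instead of inserting a duplicate.\n",
"db.add_product(\"https://www.amazon.com/dp/B002U21ZZK\")\n",
"db.add_product(\"https://www.amazon.com/dp/B002U21ZZK\")\n",
"```"
]
},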
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's use this method inside the sidebar of our UI. Switch to `ui.py` and make the following adjustments:\n",
"\n",
"First, update the imports to load the Database class and initialize it:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"import os\n",
"import streamlit as st\n",
"\n",
"from utils import is_valid_url\n",
"from database import Database\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()\n",
"\n",
"with st.spinner(\"Loading database...\"):\n",
" db = Database(os.getenv(\"POSTGRES_URL\"))\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code integrates the `Database` class into the Streamlit UI by importing required dependencies and establishing a database connection. The database URL is loaded securely from environment variables using `python-dotenv`. The `Database` class creates or updates the tables we specified in `database.py` after being initialized.\n",
"\n",
"The database initialization process is wrapped in a Streamlit spinner component to maintain responsiveness while establishing the connection. This provides visual feedback during the connection setup period, which typically requires a brief initialization time."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, in the sidebar code, we only need to add a single line of code to add the product to the database if the URL is valid:\n",
"\n",
"```python\n",
"# Set up sidebar\n",
"with st.sidebar:\n",
" st.title(\"Add New Product\")\n",
" product_url = st.text_input(\"Product URL\")\n",
" add_button = st.button(\"Add Product\")\n",
"\n",
" if add_button:\n",
" if not product_url:\n",
" st.error(\"Please enter a product URL\")\n",
" elif not is_valid_url(product_url):\n",
" st.error(\"Please enter a valid URL\")\n",
" else:\n",
" db.add_product(product_url) # This is the new line\n",
" st.success(\"Product is now being tracked!\")\n",
"```\n",
"\n",
"In the final `else` block that runs when the product URL is valid, we call the `add_product` method to store the product in the database.\n",
"\n",
"Let's commit everything:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Add a Postgres database integration for tracking product URLs\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 6: Storing price histories for new products"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, after the product is added to the `products` table, we want to add its details and its scraped price to the `price_histories` table. \n",
"\n",
"First, switch to `database.py` and add a new method for creating entries in the `PriceHistories` table:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"class Database:\n",
" ... # the rest of the class\n",
"\n",
" def add_price(self, product_data):\n",
" session = self.Session()\n",
" try:\n",
" price_history = PriceHistory(\n",
" id=f\"{product_data['url']}_{product_data['timestamp']}\",\n",
" product_url=product_data[\"url\"],\n",
" name=product_data[\"name\"],\n",
" price=product_data[\"price\"],\n",
" currency=product_data[\"currency\"],\n",
" main_image_url=product_data[\"main_image_url\"],\n",
" timestamp=product_data[\"timestamp\"],\n",
" )\n",
" session.add(price_history)\n",
" session.commit()\n",
" finally:\n",
" session.close()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `add_price` method takes a dictionary containing product data (which is returned by our scraper) and creates a new entry in the `PriceHistory` table. The entry's ID is generated by combining the product URL with a timestamp. The method stores essential product information like name, price, currency, image URL, and the timestamp of when the price was recorded. It uses SQLAlchemy's session management to safely commit the new price history entry to the database.\n",
"\n",
"Now, we need to add this functionality to the sidebar as well. In `ui.py`, add a new import statement that loads the `scrape_product` function from `scraper.py`:\n",
"\n",
"```python\n",
"... # The rest of the imports\n",
"from scraper import scrape_product\n",
"```\n",
"\n",
"Then, update the `else` block in the sidebar again:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"with st.sidebar:\n",
" st.title(\"Add New Product\")\n",
" product_url = st.text_input(\"Product URL\")\n",
" add_button = st.button(\"Add Product\")\n",
"\n",
" if add_button:\n",
" if not product_url:\n",
" st.error(\"Please enter a product URL\")\n",
" elif not is_valid_url(product_url):\n",
" st.error(\"Please enter a valid URL\")\n",
" else:\n",
" db.add_product(product_url)\n",
" with st.spinner(\"Added product to database. Scraping product data...\"):\n",
" product_data = scrape_product(product_url)\n",
" db.add_price(product_data)\n",
" st.success(\"Product is now being tracked!\")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now when a user enters a product URL and clicks the \"Add Product\" button, several things happen:\n",
"\n",
"1. The URL is validated to ensure it's not empty and is properly formatted.\n",
"2. If valid, the URL is added to the products table via `add_product()`.\n",
"3. The product page is scraped immediately to get current price data.\n",
"4. This initial price data is stored in the price history table via `add_price()`.\n",
"5. The user sees loading spinners and success messages throughout the process.\n",
"\n",
"This gives us a complete workflow for adding new products to track, including capturing their initial price point. The UI provides clear feedback at each step and handles errors gracefully.\n",
"\n",
"Check that everything is working the way we want it and then, commit the new changes:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Add a feature to track product prices after they are added\"\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 7: Displaying each product's price history in the main dashboard"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the final product shown in the introduction once again:\n",
"\n",
"![](images/sneak-peek.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Apart from the sidebar, the main dashboard shows each product's price history visualized with a Plotly line plot where the X axis is the timestamp while the Y axis is the prices. Each line plot is wrapped in a Streamlit component that includes buttons for removing the product from the database or visiting its source URL. \n",
"\n",
"In this step, we will implement the plotting feature and leave the two buttons for a later section. First, add a new method to the `Database` class for retrieving the price history for each product:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"class Database:\n",
" ... # The rest of the code\n",
"\n",
" def get_price_history(self, url):\n",
" \"\"\"Get price history for a product\"\"\"\n",
" session = self.Session()\n",
" try:\n",
" return (\n",
" session.query(PriceHistory)\n",
" .filter(PriceHistory.product_url == url)\n",
" .order_by(PriceHistory.timestamp.desc())\n",
" .all()\n",
" )\n",
" finally:\n",
" session.close()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The method queries the price histories table based on product URL, orders the rows in descending order (oldest first) and returns the results. "
]
},
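{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because of `.desc()`, the newest entry sits at index 0 and the earliest recorded price at the end of the list; we will rely on this ordering later when detecting price drops:\n",
"\n",
"```python\n",
"# Assumes `db` is an initialized Database and the product is already tracked\n",
"history = db.get_price_history(\"https://www.amazon.com/dp/B002U21ZZK\")\n",
"\n",
"latest = history[0]     # most recent price entry\n",
"earliest = history[-1]  # first price we ever recorded\n",
"print(latest.price, earliest.price)\n",
"```"
]
},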
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, add another method for retrieving all products from the `products` table:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"class Database:\n",
" ...\n",
" \n",
" def get_all_products(self):\n",
" session = self.Session()\n",
" try:\n",
" return session.query(Product).all()\n",
" finally:\n",
" session.close()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The idea is that every time our Streamlit app is opened, the main dashboard queries all existing products from the database and render their price histories with line charts in dedicated components. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create the line charts, we need Plotly and Pandas, so install them in your environment:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```bash\n",
"pip install pandas plotly\n",
"echo \"pandas\\nplotly\" >> requirements.txt\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Afterward, import them at the top of `ui.py` along with other existing imports:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"import pandas as pd\n",
"import plotly.express as px\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, switch to `ui.py` and paste the following snippet of code after the Main content section:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# Main content\n",
"st.title(\"Price Tracker Dashboard\")\n",
"st.markdown(\"## Tracked Products\")\n",
"\n",
"# Get all products\n",
"products = db.get_all_products()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, after the page title and subtitle is shown, we are retrieving all products from the database. Let's loop over them:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# Create a card for each product\n",
"for product in products:\n",
" price_history = db.get_price_history(product.url)\n",
" if price_history:\n",
" # Create DataFrame for plotting\n",
" df = pd.DataFrame(\n",
" [\n",
" {\"timestamp\": ph.timestamp, \"price\": ph.price, \"name\": ph.name}\n",
" for ph in price_history\n",
" ]\n",
" )\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For each product, we get their price history with `db.get_price_history` and then, convert this data into a dataframe with three columns:\n",
"\n",
"- Timestamp\n",
"- Price\n",
"- Product name\n",
"\n",
"This makes plotting easier with Plotly. Next, we create a Streamlit expander component for each product:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# Create a card for each product\n",
"for product in products:\n",
" price_history = db.get_price_history(product.url)\n",
" if price_history:\n",
" ...\n",
" # Create a card-like container for each product\n",
" with st.expander(df[\"name\"][0], expanded=False):\n",
" st.markdown(\"---\")\n",
" col1, col2 = st.columns([1, 3])\n",
"\n",
" with col1:\n",
" if price_history[0].main_image_url:\n",
" st.image(price_history[0].main_image_url, width=200)\n",
" st.metric(\n",
" label=\"Current Price\",\n",
" value=f\"{price_history[0].price} {price_history[0].currency}\",\n",
" )\n",
"```\n",
"\n",
"The expander shows the product name as its title and contains:\n",
"\n",
"1. A divider line\n",
"2. Two columns:\n",
" - Left column: Product image (if available) and current price metric\n",
" - Right column (shown in next section)\n",
"\n",
"The price is displayed using Streamlit's metric component which shows the current price and currency.\n",
"\n",
"Here is the rest of the code:\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
" ...\n",
" \n",
" with col2:\n",
" # Create price history plot\n",
" fig = px.line(\n",
" df,\n",
" x=\"timestamp\",\n",
" y=\"price\",\n",
" title=None,\n",
" )\n",
" fig.update_layout(\n",
" xaxis_title=None,\n",
" yaxis_title=\"Price ($)\",\n",
" showlegend=False,\n",
" margin=dict(l=0, r=0, t=0, b=0),\n",
" height=300,\n",
" )\n",
" fig.update_xaxes(tickformat=\"%Y-%m-%d %H:%M\", tickangle=45)\n",
" fig.update_yaxes(tickprefix=\"$\", tickformat=\".2f\")\n",
" st.plotly_chart(fig, use_container_width=True)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the right column, we create an interactive line plot using Plotly Express to visualize the price history over time. The plot shows price on the y-axis and timestamp on the x-axis. The layout is customized to remove the title, adjust axis labels and formatting, and optimize the display size. The timestamps are formatted to show date and time, with angled labels for better readability. Prices are displayed with 2 decimal places and a dollar sign prefix. The plot is rendered using Streamlit's `plotly_chart` component and automatically adjusts its width to fill the container.\n",
"\n",
"After this step, the UI must be fully functional and ready to track products. For example, here is what mine looks like after adding a couple of products:\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](images/finished.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But notice how the price history chart doesn't show anything. That's because we haven't populated it by checking the product price in regular intervals. Let's do that in the next couple of steps. For now, commit the latest changes we've made:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Display product price histories for each product in the dashboard\"\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"------------\n",
"\n",
"Let's take a brief moment to summarize the steps we took so far and what's next. So far, we've built a Streamlit interface that allows users to add product URLs and displays their current prices and basic information. We've implemented the database schema, created functions to scrape product data, and designed a clean UI with price history visualization. The next step is to set up automated price checking to populate our history charts and enable proper price tracking over time.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 8: Adding new price entries for existing products"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we want to write a script that adds new price entries in the `price_histories` table for each product in `products` table. We call this script `check_prices.py`:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"import os\n",
"from database import Database\n",
"from dotenv import load_dotenv\n",
"from firecrawl import FirecrawlApp\n",
"from scraper import scrape_product\n",
"\n",
"load_dotenv()\n",
"\n",
"db = Database(os.getenv(\"POSTGRES_URL\"))\n",
"app = FirecrawlApp()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At the top, we are importing the functions and packages and initializing the database and a Firecrawl app. Then, we define a simple `check_prices` function:\n",
"\n",
"```python\n",
"def check_prices():\n",
" products = db.get_all_products()\n",
"\n",
" for product in products:\n",
" # Retrieve updated product data\n",
" updated_product = scrape_product(product.url)\n",
"\n",
" # Add the price to the database\n",
" db.add_price(updated_product)\n",
" print(f\"Added new price entry for {updated_product['name']}\")\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" check_prices()\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the function body, we retrieve all products URLs, retrieve their new price data with `scrape_product` function from `scraper.py` and then, add a new price entry for the product with `db.add_price`. \n",
"\n",
"If you run the function once and refresh the Streamlit app, you must see a line chart appear for each product you are tracking:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](images/linechart.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's commit the changes in this step:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Add a script for checking prices of existing products\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 9: Check prices regularly with GitHub actions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate various software workflows directly from your GitHub repository. In our case, it's particularly useful because we can set up automated price checks to run the `check_prices.py` script at regular intervals (e.g., daily or hourly) without manual intervention. This ensures we consistently track price changes and maintain an up-to-date database of historical prices for our tracked products.\n",
"\n",
"So, the first step is creating a new GitHub repository for our project and pushing existing code to it:\n",
"\n",
"```bash\n",
"git remote add origin https://github.com/yourusername/price-tracker.git\n",
"git push origin main\n",
"```\n",
"\n",
"Then, return to your terminal and create this directory structure:\n",
"\n",
"```bash\n",
"mkdir -p .github/workflows\n",
"touch .github/workflows/check_prices.yml\n",
"```\n",
"\n",
"The first command creates a new directory structure `.github/workflows` using the `-p` flag to create parent directories if they don't exist.\n",
"\n",
"The second command creates an empty YAML file called `check_prices.yml` inside the workflows directory. GitHub Actions looks for workflow files in this specific location - any YAML files in the `.github/workflows` directory will be automatically detected and processed as workflow configurations. These YAML files define when and how your automated tasks should run, what environment they need, and what commands to execute. In our case, this file will contain instructions for GitHub Actions to periodically run our price checking script. Let's write it:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```yaml\n",
"name: Price Check\n",
"\n",
"on:\n",
" schedule:\n",
" # Runs every 3 minutes\n",
" - cron: \"*/3 * * * *\"\n",
" workflow_dispatch: # Allows manual triggering\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's break down this first part of the YAML file:\n",
"\n",
"The `name: Price Check` line gives our workflow a descriptive name that will appear in the GitHub Actions interface.\n",
"\n",
"The `on:` section defines when this workflow should be triggered. We've configured two triggers:\n",
"\n",
"1. A schedule using cron syntax `*/3 * * * *` which runs the workflow every 3 minutes. The five asterisks represent minute, hour, day of month, month, and day of week respectively. The `*/3` means \"every 3rd minute\". The 3-minute interval is for debugging purposes, we will need to choose a wider interval later on to respect the free limits of GitHub actions. \n",
"\n",
"2. `workflow_dispatch` enables manual triggering of the workflow through the GitHub Actions UI, which is useful for testing or running the check on-demand.\n",
"\n",
"Now, let's add the rest:\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```yaml\n",
"jobs:\n",
" check-prices:\n",
" runs-on: ubuntu-latest\n",
"\n",
" steps:\n",
" - name: Checkout code\n",
" uses: actions/checkout@v4\n",
"\n",
" - name: Set up Python\n",
" uses: actions/setup-python@v5\n",
" with:\n",
" python-version: \"3.10\"\n",
" cache: \"pip\"\n",
"\n",
" - name: Install dependencies\n",
" run: |\n",
" python -m pip install --upgrade pip\n",
" pip install -r automated_price_tracking/requirements.txt\n",
"\n",
" - name: Run price checker\n",
" env:\n",
" FIRECRAWL_API_KEY: ${{ secrets.FIRECRAWL_API_KEY }}\n",
" POSTGRES_URL: ${{ secrets.POSTGRES_URL }}\n",
" run: python automated_price_tracking/check_prices.py\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's break down this second part of the YAML file:\n",
"\n",
"The `jobs:` section defines the actual work to be performed. We have one job named `check-prices` that runs on an Ubuntu virtual machine (`runs-on: ubuntu-latest`).\n",
"\n",
"Under `steps:`, we define the sequence of actions:\n",
"\n",
"1. First, we checkout our repository code using the standard `actions/checkout@v4` action\n",
"\n",
"2. Then we set up Python 3.10 using `actions/setup-python@v5`, enabling pip caching to speed up dependency installation\n",
"\n",
"3. Next, we install our Python dependencies by upgrading `pip` and installing requirements from our `requirements.txt` file. At this point, it is essential that you were keeping a complete dependency file based on the installs we made in the project. \n",
"\n",
"4. Finally, we run our price checker script, providing two environment variables:\n",
" - `FIRECRAWL_API_KEY`: For accessing the web scraping service\n",
" - `POSTGRES_URL`: For connecting to our database\n",
"\n",
"Both variables must be stored in our GitHub repository as secrets for this workflow file to run without errors. So, navigate to the repository you've created for the project and open its Settings. Under \"Secrets and variables\" > \"Actions\", click on \"New repository secret\" button to add the environment variables we have in the `.env` file one-by-one. \n",
"\n",
"Then, return to your terminal, commit the changes and push:\n",
"\n",
"```bash\n",
"git add . \n",
"git commit -m \"Add a workflow to check prices regularly\"\n",
"git push origin main\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, navigate to your GitHub repository again and click on the \"Actions\" tab:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](images/actions.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From there, you can run the workflow manually (click \"Run workflow\" and refresh the page). If it is executed successfully, you can return to the Streamlit app and refresh to see the new price added to the chart."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 10: Setting up Discord for notifications"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we know our scheduling workflow works, the first order of business is setting a wider check interval in the workflow file. Even though our first workflow run was manually, the rest happen automatically."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```bash\n",
"on:\n",
" schedule:\n",
" # Runs every 6 hours\n",
" - cron: \"0 0,6,12,18 * * *\"\n",
" workflow_dispatch: # Allows manual triggering\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the workflow file, change the cron field to the syntax you see above, which runs the workflow at the first minute of 12am, 6am, 12pm and 6pm UTC. Then, commit and push the changes:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Set a wider check interval in the workflow file\"\n",
"git push origin main\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now comes the interesting part. Each time the workflow is run, we want to compare the current price of the product to its original price when we started tracking it. If the difference between these two prices is below a certain threshold like 5%, this means there is a discount happening for the product and we want to send a notification. \n",
"\n",
"The easiest way to set this up is by using Discord webhooks. So, if you haven't got one already, go to Discord.com and create a new account (optionally, download the desktop app as well). Then, log in to your account and you will find a \"Plus\" button in the bottom-left corner. Click on it to create your own Discord server:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](images/discord.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After pressing \"Plus\", choose \"Create my own\" and \"For me and my friends\". Then, give a new name to your server and you will be presented with an empty channel:\n",
"\n",
"![](images/new-server.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Right click on \"general\" and choose \"Edit channel\". Switch to the integrations tab and click on \"Create webhook\". Discord immediately generates a new webhook with a random name and you should copy its URL. \n",
"\n",
"![](images/webhook.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Webhooks are automated messages sent from apps to other apps in real-time. They work like a notification system - when something happens in one app, it automatically sends data to another app through a unique URL. In our case, we'll use Discord webhooks to automatically notify us when there's a price drop. Whenever our price tracking script detects a significant discount, it will send a message to our Discord channel through the webhook URL, ensuring we never miss a good deal.\n",
"\n",
"After copying the webhook URL, you should save it as environment variable to your `.env` file:\n",
"\n",
"```python\n",
"echo \"DISCORD_WEBHOOK_URL='THE-URL-YOU-COPIED'\" >> .env\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, create a new file called `notifications.py` and paste the following contents:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"from dotenv import load_dotenv\n",
"import os\n",
"import aiohttp\n",
"import asyncio\n",
"\n",
"load_dotenv()\n",
"\n",
"\n",
"async def send_price_alert(\n",
" product_name: str, old_price: float, new_price: float, url: str\n",
"):\n",
" \"\"\"Send a price drop alert to Discord\"\"\"\n",
" drop_percentage = ((old_price - new_price) / old_price) * 100\n",
"\n",
" message = {\n",
" \"embeds\": [\n",
" {\n",
" \"title\": \"Price Drop Alert! 🎉\",\n",
" \"description\": f\"**{product_name}**\\nPrice dropped by {drop_percentage:.1f}%!\\n\"\n",
" f\"Old price: ${old_price:.2f}\\n\"\n",
" f\"New price: ${new_price:.2f}\\n\"\n",
" f\"[View Product]({url})\",\n",
" \"color\": 3066993,\n",
" }\n",
" ]\n",
" }\n",
"\n",
" try:\n",
" async with aiohttp.ClientSession() as session:\n",
" await session.post(os.getenv(\"DISCORD_WEBHOOK_URL\"), json=message)\n",
" except Exception as e:\n",
" print(f\"Error sending Discord notification: {e}\")\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `send_price_alert` function above is responsible for sending price drop notifications to Discord using webhooks. Let's break down what's new:\n",
"\n",
"1. The function takes 4 parameters:\n",
" - `product_name`: The name of the product that dropped in price\n",
" - `old_price`: The previous price before the drop\n",
" - `new_price`: The current lower price\n",
" - `url`: Link to view the product\n",
"\n",
"2. It calculates the percentage drop in price using the formula: `((old_price - new_price) / old_price) * 100`\n",
"\n",
"3. The notification is formatted as a Discord embed - a rich message format that includes:\n",
" - A title with a celebration emoji\n",
" - A description showing the product name, price drop percentage, old and new prices\n",
" - A link to view the product\n",
" - A green color (3066993 in decimal)\n",
"\n",
"4. The message is sent asynchronously using `aiohttp` to post to the Discord webhook URL stored in the environment variables\n",
"\n",
"5. Error handling is included to catch and print any issues that occur during the HTTP request"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"This provides a clean way to notify users through Discord whenever we detect a price drop for tracked products.\n",
"\n",
"To check the notification system works, add this main block to the end of the script:\n",
"\n",
"```python\n",
"if __name__ == \"__main__\":\n",
" asyncio.run(send_price_alert(\"Test Product\", 100, 90, \"https://www.google.com\"))\n",
"```\n",
"\n",
"`asyncio.run()` is used here because `send_price_alert` is an async function that needs to be executed in an event loop. `asyncio.run()` creates and manages this event loop, allowing the async HTTP request to be made properly. Without it, we wouldn't be able to use the `await` keyword inside `send_price_alert`.\n",
"\n",
"\n",
"To run the script, install `aiohttp`:\n",
"\n",
"```python\n",
"pip install aiohttp\n",
"echo \"aiohttp\\n\" >> requirements.txt\n",
"python notifications.py\n",
"```\n",
"\n",
"If all is well, you should get a Discord message in your server that looks like this:\n",
"\n",
"![](images/alert.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's commit the changes we have again:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Set up Discord alert system\"\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 11: Sending Discord alerts when prices drop"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, the only step left is adding a price comparison logic to `check_prices.py`. In other words, we want to use the `send_price_alert` function if the new scraped price is lower than the original. This requires a revamped `check_prices.py` script:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"import os\n",
"import asyncio\n",
"from database import Database\n",
"from dotenv import load_dotenv\n",
"from firecrawl import FirecrawlApp\n",
"from scraper import scrape_product\n",
"from notifications import send_price_alert\n",
"\n",
"load_dotenv()\n",
"\n",
"db = Database(os.getenv(\"POSTGRES_URL\"))\n",
"app = FirecrawlApp()\n",
"\n",
"# Threshold percentage for price drop alerts (e.g., 5% = 0.05)\n",
"PRICE_DROP_THRESHOLD = 0.05\n",
"\n",
"\n",
"async def check_prices():\n",
" products = db.get_all_products()\n",
" product_urls = set(product.url for product in products)\n",
"\n",
" for product_url in product_urls:\n",
" # Get the price history\n",
" price_history = db.get_price_history(product_url)\n",
" if not price_history:\n",
" continue\n",
"\n",
" # Get the earliest recorded price\n",
" earliest_price = price_history[-1].price\n",
"\n",
" # Retrieve updated product data\n",
" updated_product = scrape_product(product_url)\n",
" current_price = updated_product[\"price\"]\n",
"\n",
" # Add the price to the database\n",
" db.add_price(updated_product)\n",
" print(f\"Added new price entry for {updated_product['name']}\")\n",
"\n",
" # Check if price dropped below threshold\n",
" if earliest_price > 0: # Avoid division by zero\n",
" price_drop = (earliest_price - current_price) / earliest_price\n",
" if price_drop >= PRICE_DROP_THRESHOLD:\n",
" await send_price_alert(\n",
" updated_product[\"name\"], earliest_price, current_price, product_url\n",
" )\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" asyncio.run(check_prices())\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's examine the key changes in this enhanced version of `check_prices.py`:\n",
"\n",
"1. New imports and setup\n",
" - Added `asyncio` for `async`/`await` support\n",
" - Imported `send_price_alert` from `notifications.py`\n",
" - Defined `PRICE_DROP_THRESHOLD = 0.05` (5% threshold for alerts)\n",
"\n",
"2. Async function conversion\n",
" - Converted `check_prices()` to async function\n",
" - Gets unique product URLs using set comprehension to avoid duplicates\n",
" \n",
"3. Price history analysis\n",
" - Retrieves full price history for each product\n",
" - Gets `earliest_price` from `history[-1]` (works because we ordered by timestamp DESC)\n",
" - Skips products with no price history using `continue`\n",
" \n",
"4. Price drop detection logic\n",
" - Calculates drop percentage: `(earliest_price - current_price) / earliest_price`\n",
" - Checks if drop exceeds 5% threshold\n",
" - Sends Discord alert if threshold exceeded using `await send_price_alert()`\n",
" \n",
"5. Async main block\n",
" - Uses `asyncio.run()` to execute async `check_prices()` in event loop\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When I tested this new version of the script, I immediately got an alert:\n",
"\n",
"![](images/new-alert.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, let's commit everything and push to GitHub so that our workflow is supercharged with our notification system:\n",
"\n",
"```bash\n",
"git add .\n",
"git commit -m \"Add notification system to price drops\"\n",
"git push origin main\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion and Next Steps"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Congratulations for making it to the end of this extremely long tutorial! We've just covered how to implement an end-to-end Python project you can proudly showcase on your portfolio. We built a complete price tracking system that scrapes product data from e-commerce websites, stores it in a Postgres database, analyzes price histories, and sends automated Discord notifications when prices drop significantly. Along the way, we learned about web scraping with Firecrawl, database management with SQLAlchemy, asynchronous programming with asyncio, building interactive UIs with Streamlit, automating with GitHub actions and integrating external webhooks.\n",
"\n",
"However, the project is far from perfect. Since we took a top-down approach to building this app, our project code is scattered across multiple files and doesn't conform to programming best practices most of the time. For this reason, I've recreated the same project in a much more sophisticated matter with production-level features. [This new version on GitHub](https://github.com/BexTuychiev/automated-price-tracking) implements proper database session management, faster operations and overall smoother user experience. \n",
"\n",
"If you decide to stick with the basic version, you can find the full project code and the notebook from the official Firecrawl GitHub repository example projects. I also recommend that you deploy your Streamlit app to Streamlit Cloud so that you have a function app accessible everywhere you go. \n",
"\n",
"Here are some more guides from our blog if you are interested:\n",
"\n",
"- [How to Run Web Scrapers on Schedule](https://www.firecrawl.dev/blog/automated-web-scraping-free-2025)\n",
"- [More about using Firecrawl's `scrape_url` function](https://www.firecrawl.dev/blog/mastering-firecrawl-scrape-endpoint)\n",
"- [Scraping entire websites with Firecrawl in a single command - the /crawl endpoint](https://www.firecrawl.dev/blog/mastering-the-crawl-endpoint-in-firecrawl)\n",
"\n",
"Thank you for reading!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.15"
}
},
"nbformat": 4,
"nbformat_minor": 2
}