🎯 Course Overview

📊 What You'll Learn

  • Image enhancement and restoration techniques
  • Spatial and frequency domain filtering
  • Image segmentation and analysis
  • Noise reduction and edge detection
  • Fourier Transform and its applications

🧠 Study Strategy

  • Understand the visual impact of each operation
  • Memorize key algorithms and their parameters
  • Practice identifying when to use each technique
  • Review mathematical foundations
  • Use visual memory aids and diagrams

📝 Test Preparation

  • Know when to apply each filtering technique
  • Understand histogram equalization steps
  • Memorize edge detection operators (Sobel, Prewitt, etc.)
  • Be able to explain frequency vs spatial domain
  • Understand segmentation algorithms

🔬 Lab Lessons

Lab 1: Histogram Equalization & Image Enhancement

🎯 Key Concepts

  • Histogram Equalization: Redistributes pixel intensities to enhance image contrast
  • Histogram Matching: Modifies image to match a specified histogram
  • Image Filtering: Processes image to improve quality or extract features
  • Edge Detection: Identifies boundaries in images

🧩 Memory Aid

Histogram Equalization Steps:

  1. Calculate histogram of original image
  2. Calculate cumulative distribution function (CDF)
  3. Normalize CDF to [0, 255]
  4. Map each pixel to new intensity value

Mnemonic: "H C C M" - "Have Calm Careful Meditation"

📊 Step-by-Step Calculation Example

Example Image: a 4×4 image with pixel values [0, 1, 2, 3]

1 Calculate Histogram

Count occurrences of each intensity level (0-255)

h(v) = number of pixels with intensity v

Example: If we have values [10, 10, 20, 30], then h(10)=2, h(20)=1, h(30)=1

2 Calculate CDF (Cumulative Distribution Function)
CDF(v) = Σ h(u) for u = 0 to v

Example: CDF(0) = h(0), CDF(1) = h(0) + h(1), CDF(2) = h(0) + h(1) + h(2)

3 Normalize CDF
T(v) = round( (CDF(v) - CDF_min) / (M×N - CDF_min) × (L-1) )

Where: M×N = total number of pixels in the image, CDF_min = smallest non-zero CDF value, L = 256 (intensity levels)

4 Map Pixels

Apply transformation to each pixel:

new_pixel = T(old_pixel_value)

For each pixel with intensity r, new intensity is s = T(r)

💻 Python/OpenCV Implementation

# Method 1: Using OpenCV built-in function
import cv2
import numpy as np

# Load image in grayscale
img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Apply histogram equalization
equalized = cv2.equalizeHist(img)

# Display results
cv2.imshow('Original', img)
cv2.imshow('Equalized', equalized)
cv2.waitKey(0)

# Method 2: Manual implementation
def manual_histogram_equalization(image):
    # Step 1: Calculate histogram
    height, width = image.shape
    histogram = np.zeros(256)
    for i in range(height):
        for j in range(width):
            histogram[image[i, j]] += 1

    # Step 2: Calculate CDF
    cdf = np.zeros(256)
    cdf[0] = histogram[0]
    for i in range(1, 256):
        cdf[i] = cdf[i-1] + histogram[i]

    # Step 3: Normalize CDF
    cdf_min = cdf[np.min(np.where(cdf > 0))]
    total_pixels = height * width
    cdf_normalized = ((cdf - cdf_min) / (total_pixels - cdf_min)) * 255

    # Step 4: Map pixels
    equalized = np.round(cdf_normalized[image]).astype(np.uint8)
    return equalized

# Apply manual equalization
result = manual_histogram_equalization(img)

Lab 2: Noise & Spatial Filtering

🎯 Key Concepts

  • Gaussian Noise: Statistical noise with bell-shaped distribution
  • Salt & Pepper Noise: Random white/black pixels
  • Spatial Filters: Operate directly on pixel neighborhoods
    • Mean Filter: Smooths image, reduces noise
    • Median Filter: Removes salt & pepper noise
    • Gaussian Filter: Weighted average, preserves edges better
  • Edge Detection Operators:
    • Sobel: Detects both vertical and horizontal edges
    • Prewitt: Similar to Sobel but different weights
    • Laplacian: Second-order derivative, sensitive to details

🧩 Memory Aid

Filter Selection Guide:

  • 🧂 Salt & Pepper? → Use Median Filter (non-linear, removes outliers)
  • 📊 Gaussian Noise? → Use Mean or Gaussian Filter (linear, averages)
  • 🔍 Edge Detection? → Use Sobel/Prewitt (first-order derivative)

Mnemonic: "S.A.G.E" - "Salt, Add Gaussian, Edges"

📊 Convolution Calculation

g(x,y) = Σ Σ f(m,n) × h(x-m, y-n)

Where f = input image, h = kernel/filter, g = output image

1 Example: 3×3 Mean Filter Convolution

Kernel =
[1/9, 1/9, 1/9]
[1/9, 1/9, 1/9]
[1/9, 1/9, 1/9]

Center pixel calculation:

New Value = (pixel₁ × 1/9) + (pixel₂ × 1/9) + ... + (pixel₉ × 1/9)
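
For example, with a hypothetical 3×3 neighborhood (values invented for illustration):

[10, 20, 30]
[40, 50, 60]
[70, 80, 90]

New center value = (10 + 20 + 30 + 40 + 50 + 60 + 70 + 80 + 90) / 9 = 450 / 9 = 50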

2 Sobel Operators
Gx =
[-1, 0, 1]
[-2, 0, 2]
[-1, 0, 1]

Gy =
[-1, -2, -1]
[ 0,  0,  0]
[ 1,  2,  1]

Gradient Magnitude: √(Gx² + Gy²)

💻 Python/OpenCV Implementation

# Apply noise to image
import cv2
import numpy as np

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Add Gaussian noise (add in float, then clip back to the valid 8-bit range)
mean = 0
std = 25
gaussian_noise = np.random.normal(mean, std, img.shape)
gaussian_img = np.clip(img.astype(np.float64) + gaussian_noise, 0, 255).astype(np.uint8)

# Add Salt & Pepper noise
noise = np.random.random(img.shape)
salt_pepper_img = img.copy()
noise_density = 0.05
salt_pepper_img[noise < noise_density / 2] = 0
salt_pepper_img[noise > 1 - noise_density / 2] = 255

# Apply filters
# 1. Mean Filter (for Gaussian noise)
mean_filtered = cv2.blur(gaussian_img, (3, 3))

# 2. Median Filter (for Salt & Pepper)
median_filtered = cv2.medianBlur(salt_pepper_img, 3)

# 3. Gaussian Filter
gaussian_filtered = cv2.GaussianBlur(gaussian_img, (3, 3), 0)

# Edge Detection
# Sobel Operator
sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_combined = np.sqrt(sobel_x**2 + sobel_y**2)

# Prewitt Operator (manual kernels)
kernelx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
kernely = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])
prewitt_x = cv2.filter2D(img, -1, kernelx)
prewitt_y = cv2.filter2D(img, -1, kernely)

# Laplacian Operator
laplacian = cv2.Laplacian(img, cv2.CV_64F)

Lab 3: Frequency Domain Filtering

🎯 Key Concepts

  • Fourier Transform: Converts image from spatial to frequency domain
  • Low-Pass Filter: Removes high frequencies (smooths image)
    • Removes noise and fine details
    • Can cause blurring
  • High-Pass Filter: Removes low frequencies (enhances edges)
    • Preserves edges and details
    • Removes slowly varying intensities
  • Notch Filter: Removes specific frequency components (removes periodic noise)
  • Ideal vs Gaussian vs Butterworth: Different filter types with different sharpness

🧩 Memory Aid

Frequency Domain Guide:

  • 🌊 High Frequency = Fine details, edges, noise
  • 🏔️ Low Frequency = Smooth areas, background
  • 📉 Low-Pass Filter = "LPF removes the HIGHs" → Smooth & Blur
  • 📈 High-Pass Filter = "HPF removes the LOWs" → Sharpen & Detect

Mnemonic: "Lo-FI Smooths, Hi-FI Sharpens"

📐 Fourier Transform Formula

F(u,v) = Σ Σ f(x,y) × e^(-j2π(ux/M + vy/N))

Where: M, N are the image dimensions and j = √(-1)

1 Forward Fourier Transform

Convert from spatial to frequency domain

2 Apply Filter in Frequency Domain

G(u,v) = F(u,v) × H(u,v)

Where H(u,v) is the filter (low-pass, high-pass, etc.)

3 Inverse Fourier Transform
f(x,y) = (1/MN) Σ Σ F(u,v) × e^(j2π(ux/M + vy/N))

Convert back to spatial domain

4 Common Filters

Ideal Low-Pass: H(u,v) = 1 if √(u²+v²) ≤ D₀, else 0

Ideal High-Pass: H(u,v) = 0 if √(u²+v²) ≤ D₀, else 1

(Where D₀ is the cutoff frequency)
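
Since the Key Concepts above also mention Gaussian and Butterworth variants, here is a minimal NumPy sketch of all three low-pass transfer functions (illustrative only; `lowpass_masks`, D0 = 30 and the order n = 2 are assumed names/values, and each mask would multiply the shifted spectrum just like the lab code below):

import numpy as np

def lowpass_masks(rows, cols, D0=30, n=2):
    # Distance D(u,v) of each frequency from the center of the shifted spectrum
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)

    ideal = (D <= D0).astype(float)                  # sharp cutoff, causes ringing
    butterworth = 1.0 / (1.0 + (D / D0) ** (2 * n))  # smooth roll-off of order n
    gaussian = np.exp(-(D ** 2) / (2 * D0 ** 2))     # smoothest transition, no ringing
    return ideal, butterworth, gaussian

# The corresponding high-pass filters are simply 1 - mask for each of the three.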

💻 Python/OpenCV Implementation

import cv2
import numpy as np

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Step 1: Compute Fourier Transform
f_transform = np.fft.fft2(img)
f_shift = np.fft.fftshift(f_transform)
magnitude_spectrum = np.log(np.abs(f_shift) + 1)

# Step 2: Create Low-Pass Filter
rows, cols = img.shape
crow, ccol = rows // 2, cols // 2
D0 = 30  # Cutoff frequency

# Create circular mask (1 inside the cutoff radius, 0 outside)
mask = np.zeros((rows, cols), np.uint8)
r = D0
center = [crow, ccol]
x, y = np.ogrid[:rows, :cols]
mask_area = (x - center[0])**2 + (y - center[1])**2 <= r * r
mask[mask_area] = 1

# Apply Low-Pass Filter
f_shift_filtered = f_shift * mask
magnitude_spectrum_filtered = np.log(np.abs(f_shift_filtered) + 1)

# Step 3: Inverse Fourier Transform
f_ishift = np.fft.ifftshift(f_shift_filtered)
img_back = np.fft.ifft2(f_ishift)
img_back = np.abs(img_back)

# High-Pass Filter (complement of the low-pass mask)
mask_hp = 1 - mask
f_shift_hp = f_shift * mask_hp
img_back_hp = np.abs(np.fft.ifft2(np.fft.ifftshift(f_shift_hp)))

# Display results
cv2.imshow('Original', img)
cv2.imshow('Low-Pass Filtered', img_back.astype(np.uint8))
cv2.imshow('High-Pass Filtered', img_back_hp.astype(np.uint8))
cv2.waitKey(0)

Lab 4: Image Segmentation

🎯 Key Concepts

  • Thresholding:
    • Global: Single threshold for entire image
    • Local (Adaptive): Different threshold for different regions
    • Otsu's Method: Automatically finds optimal threshold
  • Region Growing: Seeds grow based on similarity criteria
  • K-Means Clustering: Partitions image into K clusters based on pixel intensities
  • Clustering: Groups similar pixels together

🧩 Memory Aid

Segmentation Techniques:

  1. 🌡️ Simple threshold? → Global Thresholding
  2. 🌡️🌡️ Varying illumination? → Adaptive (Local) Thresholding
  3. 🤖 Let the computer decide? → Otsu's Method
  4. 🌱 Grow from seeds? → Region Growing
  5. 🎯 Group into clusters? → K-Means

Mnemonic: "T.O.R.K" - "Threshold, Otsu, Region, K-means"

📊 Otsu's Method Calculation

1 Calculate Histogram

Compute normalized histogram: p(i) = h(i) / (M×N)

2 For Each Threshold t (0-255)

w₀(t) = Σ p(i) for i = 0 to t        # Weight of background
w₁(t) = Σ p(i) for i = t+1 to 255    # Weight of foreground

3 Calculate Class Means

μ₀(t) = Σ i×p(i) / w₀(t) for i = 0 to t
μ₁(t) = Σ i×p(i) / w₁(t) for i = t+1 to 255

4 Calculate Between-Class Variance

σ²(t) = w₀(t) × w₁(t) × (μ₀(t) - μ₁(t))²

5 Find Optimal Threshold

The optimal threshold is the t that maximizes σ²(t)
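
A compact NumPy sketch of this search over all thresholds (an illustrative implementation, not the official lab solution; `otsu_threshold` is an assumed helper name):

import numpy as np

def otsu_threshold(image):
    # Step 1: normalized histogram p(i) = h(i) / (M*N)
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    i = np.arange(256)
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 = p[:t + 1].sum()          # weight of background
        w1 = p[t + 1:].sum()          # weight of foreground
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (i[:t + 1] * p[:t + 1]).sum() / w0   # background mean
        mu1 = (i[t + 1:] * p[t + 1:]).sum() / w1   # foreground mean
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if sigma_b > best_var:
            best_var, best_t = sigma_b, t
    return best_t

# Should agree with cv2.threshold(..., cv2.THRESH_OTSU) up to rounding.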

📐 K-Means Clustering Algorithm
  1. Initialize K cluster centers randomly
  2. Assign each pixel to nearest cluster
  3. Update cluster centers (mean of assigned pixels)
  4. Repeat steps 2-3 until convergence
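
A bare-bones NumPy version of these four steps for grayscale intensities (illustrative only; `kmeans_gray` is an assumed helper name, and the OpenCV call in the lab code below does the same job):

import numpy as np

def kmeans_gray(image, K=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1).astype(float)
    centers = rng.choice(pixels, size=K, replace=False)    # 1. initialize centers randomly
    for _ in range(iters):
        # 2. assign each pixel to the nearest cluster center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # 3. update each center to the mean of its assigned pixels
        new_centers = np.array([pixels[labels == k].mean() if np.any(labels == k) else centers[k]
                                for k in range(K)])
        # 4. stop when the centers no longer move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[labels].reshape(image.shape).astype(np.uint8)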

💻 Python/OpenCV Implementation

import cv2
import numpy as np
import queue

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# 1. Global Thresholding
_, global_thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# 2. Otsu's Thresholding (automatic)
_, otsu_thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3. Adaptive Thresholding
adaptive_thresh = cv2.adaptiveThreshold(
    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY, 11, 2
)

# 4. K-Means Clustering
# Reshape image to a feature vector (one intensity per row)
data = img.reshape((-1, 1))
data = np.float32(data)

# Define criteria and apply K-means
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 3  # Number of clusters
_, labels, centers = cv2.kmeans(data, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Convert back to 8-bit values and rebuild the image
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()]
segmented_img = segmented_data.reshape(img.shape)

# 5. Region Growing (manual implementation)
def region_growing(image, seed, threshold):
    height, width = image.shape
    segmented = np.zeros_like(image)
    visited = np.zeros(image.shape, dtype=bool)
    seed_value = int(image[seed])

    q = queue.Queue()
    q.put(seed)
    visited[seed] = True

    while not q.empty():
        x, y = q.get()
        # Accept the pixel if it is within `threshold` of the seed intensity
        if abs(int(image[x, y]) - seed_value) <= threshold:
            segmented[x, y] = 255
            # Enqueue unvisited 8-connected neighbours of accepted pixels
            for dx in [-1, 0, 1]:
                for dy in [-1, 0, 1]:
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < height and 0 <= ny < width and not visited[nx, ny]:
                        q.put((nx, ny))
                        visited[nx, ny] = True
    return segmented

# Apply region growing
seed_point = (100, 100)
segmented = region_growing(img, seed_point, 20)

📖 Core Topics - Detailed Study Notes

📘 Topic 1: Digital Image Fundamentals

🎯 Learning Objectives

  • Understand what a digital image is and its representation
  • Learn about image resolution, pixel values, and coordinate systems
  • Distinguish between grayscale and color images
  • Comprehend image acquisition and sampling concepts

✨ Key Points & Highlights

  • Pixel (Picture Element): Smallest unit of a digital image, represents intensity/color
  • Image Resolution: Determined by width × height (e.g., 1920×1080)
  • Bit Depth: Number of bits per pixel (8-bit = 256 intensity levels)
  • Coordinate System: Origin at top-left, x increases right, y increases down
  • Grayscale Images: Single channel, values from 0 (black) to 255 (white)
  • Color Images: Multiple channels (RGB = Red, Green, Blue)
  • Sampling: Converting continuous signal to discrete pixels
  • Quantization: Converting continuous intensity to discrete levels
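
A quick NumPy illustration of these points (the pixel values below are made up for the example):

import numpy as np

# A 4x3, 8-bit grayscale image: 12 pixels, intensities between 0 and 255
img = np.array([[  0,  64, 128],
                [ 32,  96, 160],
                [ 64, 128, 192],
                [255, 255, 255]], dtype=np.uint8)

print(img.shape)       # (4, 3) -> 4 rows (height) x 3 columns (width)
print(img[0, 0])       # 0     -> the pixel at the TOP-LEFT origin (row 0, column 0)
print(int(img.max()))  # 255   -> the largest value an 8-bit pixel can hold (2**8 - 1)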

⚠️ Important Points

  • Higher resolution = More pixels = More detail = Larger file size
  • 8-bit grayscale = 2^8 = 256 possible intensity values (0-255)
  • 24-bit color = 8 bits per channel × 3 channels = 16.7 million colors
  • Always remember: Image coordinates start from (0,0) at TOP-LEFT
  • Neighborhood operations use pixel and its surrounding pixels
  • Point operations modify pixel based ONLY on its own value

📝 Summary

Topic 1 establishes the foundation of digital image processing. You must understand that images are discrete arrays of pixels, each with specific coordinates and intensity values. The key distinction is between point operations (working on individual pixels) and neighborhood operations (working on pixel neighborhoods). Image acquisition involves sampling (spatial discretization) and quantization (intensity discretization). Master these basics before moving to advanced topics!

📗 Topic 2: Image Enhancement & Point Processing

🎯 Learning Objectives

  • Master point processing operations (affect each pixel independently)
  • Understand histogram analysis and manipulation
  • Learn histogram equalization for contrast enhancement
  • Apply image negation, gamma correction, and intensity transformations

✨ Key Points & Highlights

  • Histogram: Graph of pixel intensity distribution
  • Contrast: Difference between darkest and brightest regions
  • Histogram Equalization: Redistributes intensities to maximize contrast
    • Steps: Calculate histogram → Compute CDF → Normalize → Map pixels
    • Formula: T(v) = round( (CDF(v) - CDF_min) / (M×N - CDF_min) × 255 )
  • Image Negation: s = 255 - r (inverts intensities)
  • Gamma Correction: s = c × r^γ (corrects brightness perception)
    • γ < 1: Brightens dark areas
    • γ > 1: Darkens bright areas
  • Clipping: Values below 0 set to 0, above 255 set to 255
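
A short sketch of these point operations (illustrative; `negate` and `gamma_correct` are assumed helper names, applied here to a toy gradient image):

import numpy as np

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # toy gradient image

def negate(img):
    # Image negation: s = 255 - r
    return 255 - img

def gamma_correct(img, gamma, c=1.0):
    # Power-law transform: s = c * r^gamma, computed on intensities scaled to [0, 1]
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

negative = negate(img)
brighter = gamma_correct(img, 0.5)   # gamma < 1 brightens dark areas
darker = gamma_correct(img, 2.0)     # gamma > 1 darkens bright areas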

⚠️ Important Points

  • Point operations = NO neighbor pixels involved, each pixel processed independently
  • Histogram equalization is AUTOMATIC (no parameters needed)
  • Equalization maximizes contrast but doesn't always guarantee better visual quality
  • CDF is always monotonically increasing
  • After any operation, use clipping to keep values in [0, 255] range
  • Gamma correction is used in display devices (monitors, TVs)
  • Linear transformations: s = a × r + b (scales and shifts)
  • Non-linear: Log, exponential, power law transformations
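
The generic linear transformation with clipping, as a one-function sketch (the gain a and offset b here are arbitrary example values):

import numpy as np

def linear_transform(img, a=1.5, b=20):
    # s = a * r + b, then clip back into the valid 8-bit range [0, 255]
    s = a * img.astype(np.float64) + b
    return np.clip(s, 0, 255).astype(np.uint8)

stretched = linear_transform(np.arange(256, dtype=np.uint8).reshape(16, 16))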

📝 Summary

Topic 2 focuses on enhancing images through point operations. The histogram is your most important tool for analyzing image quality. Histogram equalization is crucial - understand its 4 steps and the transformation formula. Point operations are fast because they don't consider neighbors. Master when to use different transformations: negation for medical images, gamma for display correction, equalization for low-contrast images. These operations form the foundation for understanding neighborhood processing!

📙 Topic 3: Spatial Filtering & Edge Detection

🎯 Learning Objectives

  • Understand spatial domain filtering and convolution
  • Master noise reduction filters (mean, median, Gaussian)
  • Learn edge detection operators (Sobel, Prewitt, Laplacian)
  • Comprehend first-order vs second-order derivatives

✨ Key Points & Highlights

  • Spatial Filtering: Apply kernel/filter to each pixel neighborhood
    • Convolution formula: g(x,y) = Σ Σ f(m,n) × h(x-m,y-n)
    • Kernel = filter matrix (3×3, 5×5, etc.)
  • Smoothing Filters:
    • Mean Filter: Averaging, reduces noise but blurs edges
    • Median Filter: Non-linear, removes salt & pepper noise
    • Gaussian Filter: Weighted averaging, preserves edges better
  • Edge Detection:
    • Sobel (1st order): Gx = [-1 0 1; -2 0 2; -1 0 1], Gy = [-1 -2 -1; 0 0 0; 1 2 1]
    • Prewitt (1st order): Similar to Sobel, different weights
    • Laplacian (2nd order): Detects zero crossings, very sensitive
  • First-Order Derivatives: Detect gradients (edges = high gradient)
  • Second-Order Derivatives: Detect zero crossings (edges = sign change)

⚠️ Important Points

  • Median filter is BEST for salt & pepper noise (removes outliers)
  • Mean filter is BEST for Gaussian noise (averages out noise)
  • 3×3 kernel is most common, but sizes can vary (5×5, 7×7)
  • Gaussian filter parameter: σ (standard deviation) controls the amount of blur
  • Larger kernel = more smoothing = more blurring
  • Sobel operators detect BOTH horizontal and vertical edges
  • Edge magnitude = √(Gx² + Gy²)
  • Unsharp masking: mask = original - blurred; sharpened = original + k × mask (see the sketch below)
  • All filters process pixel AND its neighbors
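
A minimal unsharp-masking sketch based on the bullet above (the blur size and the weight k are arbitrary example choices):

import cv2
import numpy as np

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)
mask = cv2.subtract(img, blurred)                  # the unsharp mask: original - blurred
k = 1.5                                            # sharpening strength
sharpened = cv2.addWeighted(img, 1.0, mask, k, 0)  # original + k * mask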

📊 Filter Selection Guide

Noise Type     | Best Filter      | Why?
Salt & Pepper  | Median Filter    | Non-linear, removes outliers
Gaussian       | Mean or Gaussian | Linear, averages the noise
Impulse        | Median Filter    | Removes sudden spikes
Edge Detection | Sobel/Prewitt    | First-order derivative

📝 Summary

Topic 3 introduces neighborhood operations through spatial filtering. Convolution is the mathematical foundation - understand the kernel placement and multiplication. Smoothing filters reduce noise but blur edges; choose median for salt & pepper, mean/Gaussian for Gaussian noise. Edge detection uses derivatives: Sobel/Prewitt (first-order) detect gradients, Laplacian (second-order) detects zero crossings. Master the filter selection guide - this is test-critical! Remember: point operations = pixel alone, spatial filtering = pixel + neighbors.

📕 Topic 4: Frequency Domain & Advanced Techniques

🎯 Learning Objectives

  • Understand frequency domain representation via Fourier Transform
  • Master frequency domain filtering (low-pass, high-pass, band-pass)
  • Learn image segmentation techniques
  • Comprehend mathematical morphology and advanced operations

✨ Key Points & Highlights

  • Fourier Transform:
    • Converts spatial domain to frequency domain
    • Formula: F(u,v) = Σ Σ f(x,y) × e^(-j2π(ux/M + vy/N))
    • Low frequencies = smooth areas, background
    • High frequencies = edges, details, noise
  • Frequency Domain Filters:
    • Low-Pass Filter (LPF): Removes HIGH frequencies → Smooths/blurs
    • High-Pass Filter (HPF): Removes LOW frequencies → Sharpens/enhances edges
    • Notch Filter: Removes specific frequency components
  • Filter Types:
    • Ideal: Sharp cutoff (brick wall), causes ringing
    • Gaussian: Smooth transition, no ringing
    • Butterworth: Between ideal and Gaussian
  • Segmentation:
    • Thresholding: Global (single value), Adaptive (local)
    • Otsu's Method: Automatic threshold selection
    • Region Growing: Start from seeds, grow based on similarity
    • K-Means Clustering: Partition into K clusters
  • Mathematical Morphology:
    • Erosion: Shrinks objects, removes small details
    • Dilation: Grows objects, fills holes
    • Opening: Erosion followed by dilation
    • Closing: Dilation followed by erosion
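
The morphological operations are not covered by the lab code above, so here is a minimal OpenCV sketch (it assumes a binary image and a 3×3 structuring element):

import cv2
import numpy as np

img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
kernel = np.ones((3, 3), np.uint8)                          # structuring element

eroded = cv2.erode(binary, kernel)                          # shrinks objects, removes small details
dilated = cv2.dilate(binary, kernel)                        # grows objects, fills holes
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # erosion then dilation: removes small noise
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # dilation then erosion: fills small holes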

⚠️ Important Points

  • LPF removes HIGHs → smooths (the name describes what the filter passes, not what it removes!)
  • HPF removes LOWs → sharpens (enhances edges)
  • FFT = Fast Fourier Transform (efficient computation)
  • Center of frequency spectrum contains DC component (average brightness)
  • Otsu's method maximizes between-class variance
  • Formula: σ²(t) = w₀(t) × w₁(t) × (μ₀(t) - μ₁(t))²
  • Adaptive thresholding needed when illumination varies across image
  • Morphological operations require structuring element (kernel)
  • Dual nature: Dilation shrinks background, Erosion shrinks foreground
  • Opening removes noise, Closing fills small holes

📊 Frequency vs Spatial Domain

Aspect         | Spatial Domain                       | Frequency Domain
Representation | Pixel coordinates (x,y)              | Frequency components (u,v)
Filtering      | Convolution (slow for large kernels) | Multiplication (fast)
Best for       | Local operations, small filters      | Global operations, large filters
Interpretation | Intuitive (visual)                   | Mathematical (spectral)

📝 Summary

Topic 4 is the most advanced, covering frequency domain and segmentation. Frequency domain filtering is powerful for global operations - remember: LPF smooths (removes HIGHs), HPF sharpens (removes LOWs). The Fourier Transform converts between domains. Segmentation is crucial: use global thresholding for simple cases, adaptive for varying illumination, Otsu's for automatic selection, and K-means for clustering. Mathematical morphology (erosion, dilation, opening, closing) is essential for shape analysis. Master the frequency-spatial domain table and filter selection guide. This topic combines everything from Topics 1-3!

📚 Quick Reference Glossary

Histogram

Graph showing distribution of pixel intensities in an image

CDF (Cumulative Distribution Function)

Cumulative sum of histogram values, used in equalization

Spatial Domain

Image plane where operations work directly on pixel coordinates

Frequency Domain

Representation where image is expressed in terms of frequencies

Convolution

Mathematical operation for applying filters to images

Kernel/Filter

Small matrix used to process images (e.g., 3×3, 5×5)

Edge

Boundary between regions with different intensities

Noise

Random variation in pixel values that obscures information

Segmentation

Process of dividing image into meaningful regions

Threshold

Value used to separate pixels into different groups

✏️ Practice Questions

Question 1: Histogram Equalization

What are the 4 main steps in histogram equalization?

Question 2: Filter Selection

Which filter would you use to remove salt & pepper noise? Why?

Question 3: Frequency Domain

What's the difference between low-pass and high-pass filters?

Question 4: Segmentation

When would you use adaptive thresholding instead of global thresholding?

Question 5: Edge Detection

Name three edge detection operators and their order of derivative.

⚡ Quick Review Cheat Sheet

🎯 Test Day Essentials

Histogram Equalization

4 steps: H → C → C → M

Salt & Pepper Noise

Median Filter (non-linear)

Gaussian Noise

Mean/Gaussian Filter (linear)

Low-Pass Filter

Removes HIGHs → Smooths

High-Pass Filter

Removes LOWs → Sharpens

Edge Detection

Sobel/Prewitt (1st order)

Otsu's Method

Automatic threshold selection

Region Growing

Grows from seed points

K-Means

Clusters pixels into K groups