Fundamentals of algorithm analysis:
correctness and asymptotic complexity
Divide-and-conquer design pattern and how to solve recurrences
Amortized analysis
Probabilistic analysis and randomized algorithms
Linear sorting and selection algorithms
Greedy and dynamic programming design patterns
Introduction to Computational Geometry
Fundamentals of NP-completeness
Introduction to approximation algorithms
Applications and some classical algorithms (e.g. graph algorithms, geometric algorithms)
Lectures
11.10.2021
(by Zoom) Course introduction: goals, evaluation, a
glimpse of the syllabus; motivation for studying algorithms, their
correctness and complexity. Examples of loop invariants for
algorithms that find the index of the first occurrence of the maximum of an array, under different assumptions on the problem instances.
Materials:
Check file CC4010_intro-2122.pdf in
Sigarra ("Documents" folder for CC4010).
Slides CLRS (773 pp), Introduction to Algorithms
6.046J/18.401J, 2001-2005, by Erik D. Demaine and Charles
E. Leiserson. (Part of Lecture 1)
(in classroom) Loop invariants and their role in proving
correctness of iterative algorithms. Examples: exercise sheet 1 -
problem 1.
Introduction to the analysis of time complexity. Definition of
costs for elementary operations (under the RAM model). Example:
expressions defining the runtime in the worst case and the best case
of the function in problem 1 of exercise sheet 1.
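For reference, a minimal Python sketch (not necessarily the exact function from exercise sheet 1, whose statement is not reproduced here) of finding the index of the first occurrence of the maximum, with the loop invariant as a comment:

def index_of_first_max(a):
    """Return the index of the first occurrence of the maximum of a non-empty array a.
    Loop invariant: at the start of each iteration, best is the index of the
    first occurrence of the maximum of a[0..i-1]."""
    best = 0
    for i in range(1, len(a)):
        if a[i] > a[best]:   # the strict '>' keeps the *first* occurrence
            best = i
    return best

Under the RAM model the loop body runs n-1 times, so the worst case and the best case differ only in how often the assignment best = i is executed.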
18.10.2021
Asymptotic complexity. Order of growth and Big-O, Big-Theta, Big-Omega
notations. Comparing orders of growth.
Selection sort: correctness and complexity; expressions defining the runtime in the worst case and the
best case of a function that computes the position of the maximum in a
segment of an array; quadratic time complexity of selection sort in
the best case and the worst case.
Materials:
Check file CC4010_2122_slides2.pdf in
Sigarra ("Documents" folder for CC4010).
Slides CLRS (773 pp), Introduction to Algorithms
6.046J/18.401J, 2001-2005, by Erik D. Demaine and Charles
E. Leiserson. (Part of Lecture 1)
(in classroom) Asymptotic analysis. Worst-case and best-case analysis. Order of growth and Big-O, Big-Theta, Big-Omega notations. Comparing orders of growth. Exercises about asymptotic notation and function growth ratio. Analysis of the asymptotic time complexity of selection sort and insertion sort.
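For reference, a short Python sketch of the two sorts analysed (0-based indexing; the selection sort variant repeatedly places the maximum at the end of the unsorted segment, mirroring the position-of-the-maximum function above):

def index_of_max(a, hi):
    """Index of the first occurrence of the maximum of a[0..hi]."""
    best = 0
    for i in range(1, hi + 1):
        if a[i] > a[best]:
            best = i
    return best

def selection_sort(a):
    """Selection sort: Theta(n^2) comparisons in the best case and the worst case."""
    for i in range(len(a) - 1, 0, -1):
        m = index_of_max(a, i)
        a[i], a[m] = a[m], a[i]
    return a

def insertion_sort(a):
    """Insertion sort: Theta(n) in the best case (sorted input), Theta(n^2) in the worst case."""
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        # invariant: a[0..j-1] is sorted at the start of each iteration
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a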
25.10.2021
(by Zoom)
Mergesort. Proof that the recurrence T(n) = 2T(n/2) + cn solves to T(n) in Θ(n log n), using the recursion tree, assuming n is a power of 2.
Proof that the worst-case time complexity of any comparison-based
sorting algorithm (such as insertion sort, selection sort, mergesort,
heapsort, bubble sort, quicksort) which, given a_1, ..., a_n, gathers
information only through questions of the form "a_i > a_j?" (the binary
decision tree model), is Ω(n log n). Optimality of mergesort in
the worst case.
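A compact Python sketch of mergesort (returning a new sorted list rather than sorting in place):

def merge_sort(a):
    """Mergesort: T(n) = 2T(n/2) + Theta(n), hence Theta(n log n), matching the
    Omega(n log n) lower bound for comparison-based sorting."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]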
29.10.2021
(in classroom)
Orders of growth: some exercises;
introduction to little-o and
little-omega notations.
Applications of sorting. Algorithms for computing the convex hull of n
points in the plane.
Graham scan algorithm - O(n log n) - based on a
rotational sweep; orientation tests (left turns and
right turns determined by the sign of a cross product).
Idea of the divide-and-conquer
algorithm - O(n log n) - by Preparata and Hong; computation of
the lower and upper tangents in O(n).
Jarvis' march (gift wrapping) - an output-sensitive
algorithm - O(nh), where h is the number of vertices of the convex hull.
An incremental algorithm - O(n log n) - with a
preprocessing step that sorts the set of points by increasing x-coordinate.
Proof that the worst-case runtime of any algorithm for computing the convex hull of n points in the plane is Ω(n log n), by reducing the problem of sorting a vector x of n integers to the problem of determining the convex hull of the points (x_i, x_i^2), with 1 ≤ i ≤ n.
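For reference, a Python sketch of the orientation test and of an O(n log n) incremental hull that first sorts the points by x-coordinate (the monotone-chain formulation; one common implementation, not necessarily the exact variant presented in class):

def cross(o, a, b):
    """Orientation test: > 0 if o->a->b is a left turn, < 0 for a right turn,
    0 if the three points are collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Convex hull in O(n log n): sort by (x, y), then build the lower and upper
    hulls, discarding points that do not make a left turn."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half_hull(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull
    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]   # counter-clockwise, endpoints not repeated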
Materials:
Check file CC4010_2122_slides3.pdf in
Sigarra ("Documents" folder for CC4010).
(in classroom)
Recap of some of the methods presented in the previous lecture for
computing the convex hull, namely the incremental algorithm (without
the preprocessing sort step).
Binary search. Checking whether a point belongs to a convex polygon in
O(log n) by binary search.
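A Python sketch of the O(log n) membership test, assuming the polygon is convex with vertices listed in counter-clockwise order (boundary points are reported as inside):

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_polygon(poly, q):
    """Binary search on the fan of triangles with apex poly[0]: find the wedge
    that contains q, then test q against the corresponding hull edge."""
    n = len(poly)
    if cross(poly[0], poly[1], q) < 0 or cross(poly[0], poly[n - 1], q) > 0:
        return False                     # q is outside the wedge at poly[0]
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], q) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[lo + 1], q) >= 0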
Introduction to the master method for solving recurrences of the form T(n) = aT(n/b) +
f(n), where a ≥ 1, b > 1, and f is asymptotically
positive. Analysis based on the recursion tree.
Materials:
Check file CC4010_2122_slides3.pdf in
Sigarra ("Documents" folder for CC4010).
(by Zoom)
The master theorem for solving recurrences of the form T(n) = aT(n/b) +
f(n), where a ≥ 1, b > 1, and f is asymptotically positive.
Using the recursion tree method for solving some
divide-and-conquer recurrences or estimating bounds. Examples:
T(n)=T(3n/4)+T(n/4)+n and T(n)=T(n/2)+T(n/4)+1.
Proof that T(n) = T(a_1 n) + T(a_2 n) + ... + T(a_k n) + cn solves to T(n) in Θ(n)
if a_1 + a_2 + ... + a_k < 1, where c and the a_i are positive
constants, based on the analysis of
the recursion tree.
Brief reference to a generalization of this result by
Akra and Bazzi, and its
application for solving T(n)=T(n/2)+T(n/4)+1.
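A small numerical sanity check (not part of the lecture materials): Akra-Bazzi predicts T(n) = Θ(n^p) for this recurrence, where p satisfies (1/2)^p + (1/4)^p = 1, i.e. p = log_2((1+√5)/2) ≈ 0.694.

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """The recurrence T(n) = T(n/2) + T(n/4) + 1, with T(n) = 1 for n <= 1."""
    if n <= 1:
        return 1
    return T(n // 2) + T(n // 4) + 1

p = math.log2((1 + math.sqrt(5)) / 2)      # exponent predicted by Akra-Bazzi
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, T(n) / n**p)                  # the ratio stays bounded as n grows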
Other applications of divide-and-conquer: Strassen's algorithm
for matrix
multiplication.
For the next class: please study multiplication of two
n-bit integers by Karatsuba's algorithm.
Materials:
(in classroom)
Solution of some recurrences using the master theorem.
Applications of divide-and-conquer: Karatsuba's algorithm for multiplication of two
n-bit integers.
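A Python sketch of Karatsuba's algorithm on (non-negative) Python integers; the base-case threshold and the split by bit length are implementation choices:

def karatsuba(x, y):
    """Three recursive half-size products instead of four, giving
    T(n) = 3T(n/2) + Theta(n), i.e. Theta(n^log2(3)) ~ Theta(n^1.585)."""
    if x < 16 or y < 16:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)       # x = xh*2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)       # y = yh*2^m + yl
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b   # the cross terms, via one product
    return (a << (2 * m)) + (c << m) + b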
Quicksort: time complexity in the worst case and the best
case, and for split ratios (r : 1-r), with 0 < r <
1 (analysis for r = 1/2 and r = 1/10). Randomized quicksort. Expected running time of randomized quicksort.
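A minimal (out-of-place) Python sketch of randomized quicksort:

import random

def randomized_quicksort(a):
    """A uniformly random pivot gives expected O(n log n) time on every input;
    the worst case remains O(n^2)."""
    if len(a) <= 1:
        return a[:]
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)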
Materials:
Slides CLRS, Introduction to Algorithms
6.046J/18.401J, 2001-2005, by Erik D. Demaine and Charles
E. Leiserson.
(Lecture 4).
(in classroom)
Order statistics: how to select the i-th smallest of n elements (the element
with rank i)? Examples: the cases i = 1 (minimum), i = n (maximum) and i = n/2
(median). Worst-case running time
of selection by sorting (e.g., with mergesort). Quickselect: a randomized divide-and-conquer
algorithm with O(n) expected running time.
A worst-case linear-time deterministic algorithm: median of medians (groups of 5).
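A minimal Python sketch of quickselect with a random pivot (ranks are 1-based; the deterministic worst-case linear algorithm replaces random.choice by the median of medians):

import random

def quickselect(a, i):
    """Return the element of rank i in a (1 <= i <= len(a)), in expected O(n) time."""
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    if i <= len(less):
        return quickselect(less, i)
    if i <= len(less) + len(equal):
        return pivot
    return quickselect(greater, i - len(less) - len(equal))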
Materials:
Slides CLRS, Introduction to Algorithms
6.046J/18.401J, 2001-2005, by Erik D. Demaine and Charles
E. Leiserson.
(Lecture 6).
(by Zoom)
Notion of amortized complexity (for a sequence of operations on a data
structure): aggregate method, accounting method and potential
method. Illustration of the methods: dynamic table (growing the underlying
array by a single position versus doubling its
size); binary counter; stack (push, pop, multipop).
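A small Python sketch of the doubling dynamic table, instrumented to count element copies so the amortized O(1) cost of append can be observed:

class DynamicTable:
    """Doubles its capacity when full; a single append may copy every element,
    but the aggregate/accounting/potential analyses give O(1) amortized cost."""
    def __init__(self):
        self.capacity, self.size = 1, 0
        self.slots = [None]
        self.copies = 0                      # total element copies so far

    def append(self, x):
        if self.size == self.capacity:       # table full: double the space
            self.capacity *= 2
            new_slots = [None] * self.capacity
            for i in range(self.size):
                new_slots[i] = self.slots[i]
                self.copies += 1
            self.slots = new_slots
        self.slots[self.size] = x
        self.size += 1

Over n appends the copy counter stays below 2n (1 + 2 + 4 + ... < 2n), which is exactly the aggregate-method argument.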
Materials:
Slides CLRS, Introduction to Algorithms
6.046J/18.401J, 2001-2005, by Erik D. Demaine and Charles
E. Leiserson.
(Lecture 13).
Slides by Margarida Mamede,
Complexidade
Amortizada (Chapter VIII of Análise e Desenho de Algoritmos,
FCT/UNL, academic year 2010/11, p. 179)
26.11.2021
(in classroom)
First written test (in room FC2-101)
29.11.2021
(by Zoom)
Greedy strategies for solving optimization problems.
Applications. The coin changing problem: making an amount with the fewest coins;
proof of the correctness of the greedy strategy for the set of coin values
{1, 2, 5, 10, 20, 50, 100, 200}, assuming an unlimited supply of each
coin; observation that the greedy strategy can fail if the supply of
coins is finite or for other sets of coin values (e.g., {1, 300, 1000}).
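A Python sketch of the greedy change-making strategy under the unlimited-supply assumption:

def greedy_change(amount, coins=(200, 100, 50, 20, 10, 5, 2, 1)):
    """Largest coin first. Optimal for the coin system {1,2,5,10,20,50,100,200},
    but not in general: with coins (1000, 300, 1) and amount 1200 the greedy
    strategy uses 201 coins while 4 coins of value 300 suffice."""
    used = []
    for c in sorted(coins, reverse=True):
        k, amount = divmod(amount, c)
        used.extend([c] * k)
    assert amount == 0       # holds whenever 1 is among the coin values
    return used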
Finding a minimum (maximum) spanning tree (MST) of a connected,
undirected weighted graph: Prim's algorithm.
Materials:
Slides CLRS, Introduction to Algorithms
6.046J/18.401J, 2001-2005, by Erik D. Demaine and Charles
E. Leiserson.
(Lecture 16).
(in classroom)
Minimum (or maximum) spanning tree of a weighted connected undirected graph: the algorithms
of Prim and Kruskal.
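A compact Python sketch of Kruskal's algorithm with a union-find structure (edges given as (weight, u, v) tuples over vertices 0..n-1):

def kruskal(n, edges):
    """Scan the edges by non-decreasing weight and keep an edge iff it joins
    two different components (union-find with path halving)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
    return mst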
Introduction to the heap data structure for priority queues.
Heap structure; notion of parent, left child and right child of a
node. Min-heaps and max-heaps. Functions buildMaxHeap (buildMinHeap)
and maxHeapify (minHeapify). Proof that the larger subtree of a heap with
n elements has at most 2n/3 elements. Recurrence that gives an
upper bound on the running time of heapify called at the root (heapify(1)).
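A Python sketch of maxHeapify and buildMaxHeap with 0-based indexing (the call heapify(1) at the root corresponds to max_heapify(a, 0, n) here):

def max_heapify(a, i, n):
    """Sift a[i] down in a max-heap of size n; since a child's subtree has at
    most 2n/3 elements, T(n) <= T(2n/3) + Theta(1), i.e. O(log n)."""
    largest, left, right = i, 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)

def build_max_heap(a):
    """Heapify the internal nodes bottom-up; O(n) overall."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)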
Materials:
(by Zoom)
Priority queues implemented by binary heaps.
EXTRACT_MIN (EXTRACT_MAX) and DECREASE_KEY (INCREASE_KEY) functions.
Heapsort: a comparison sort with optimal worst-case time
complexity that sorts in place (O(1) extra space).
Applications of greedy algorithms: the interval scheduling problem (proof that the
"earliest finish first" strategy finds an optimal solution).
Unit-time task scheduling (analysis of an efficient implementation for
checking whether a new task can be added to the previously
selected set).
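A minimal Python sketch of the earliest-finish-first greedy for interval scheduling:

def interval_scheduling(intervals):
    """Sort by finishing time and take every interval that starts after the
    last selected one finishes; returns a maximum-size compatible subset."""
    selected, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected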
Materials:
A.P.Tomás, Slides of CC2001 (graph algorithms and applications
of greedy strategies):
(in classroom)
The greedy algorithm for finding a maximum-weight basis of a
matroid. Applications: (1) Unit-time task scheduling (proof that it gives
rise to a weighted matroid). (2) The matroid defined by the set of columns
of a matrix (brief reference to an application to traffic-count
planning for intersections). (3) The graphic matroid M(G) of a graph
G=(V,E), where M(G) = (E, {F : F is a subset of E such that (V,F) is
acyclic}), and its relation to Kruskal's algorithm (the formal proof that
M(G) is a weighted matroid was not covered in class).
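A generic Python sketch of the matroid greedy algorithm, parameterized by an independence oracle (the callback name is illustrative, not code from the course):

def matroid_greedy(elements, weight, independent):
    """Scan the elements by non-increasing weight and keep those that preserve
    independence; for a weighted matroid this returns a maximum-weight basis."""
    basis = []
    for x in sorted(elements, key=weight, reverse=True):
        if independent(basis + [x]):
            basis.append(x)
    return basis

For the graphic matroid M(G), taking independent(F) to mean "(V, F) is acyclic" recovers Kruskal's algorithm.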
Materials:
A.P.Tomás, Slides of CC2001 (graph algorithms and applications
of greedy strategies): Algoritmos
Greedy
13.12.2021
(by Zoom)
Introduction to Dynamic Programming. Applications to combinatorial
optimization problems: optimal substructure and overlapping
subproblems; computing solutions using a bottom-up or top-down (with
memoization) strategy. Examples of applications of DP to different
problems: Fibonacci sequence; Number Pyramid; minimum edit
distance for different cost functions;
the all-pairs
shortest paths problem in graphs (the Floyd-Warshall algorithm).
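A bottom-up Python sketch of the edit-distance DP with generic insertion/deletion/substitution costs (the cost parameters are illustrative):

def edit_distance(s, t, c_ins=1, c_del=1, c_sub=1):
    """D[i][j] = minimum cost of transforming s[:i] into t[:j]; O(|s|*|t|) time."""
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * c_del
    for j in range(1, n + 1):
        D[0][j] = j * c_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            same = 0 if s[i - 1] == t[j - 1] else c_sub
            D[i][j] = min(D[i - 1][j] + c_del,      # delete s[i-1]
                          D[i][j - 1] + c_ins,      # insert t[j-1]
                          D[i - 1][j - 1] + same)   # match or substitute
    return D[m][n]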
Materials:
(in classroom)
Examples of applications of Dynamic Programming: coin change (with
restrictions); counting paths in graphs.
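One possible reading of "coin change with restrictions" is a bounded supply of each coin value; a simple (unoptimized) 0/1-style DP sketch under that assumption:

def min_coins_bounded(amount, coins):
    """coins is a list of (value, count) pairs; dp[a] = fewest coins needed to
    make amount a with the available supply, or inf if a cannot be made."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for value, count in coins:
        for _ in range(count):                       # treat each copy as a 0/1 item
            for a in range(amount, value - 1, -1):   # backwards: use each copy once
                if dp[a - value] + 1 < dp[a]:
                    dp[a] = dp[a - value] + 1
    return dp[amount]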
03.01.2022
(by Zoom)
Fundamentals of NP-completeness. Computability and complexity: algorithms; decidability; existence of undecidable problems (example: the halting problem). Complexity of algorithms versus complexity of problems. Brief introduction of some problems: graph coloring (planar 3-coloring vs planar 4-coloring); longest path versus shortest path; vertex cover (bipartite graphs vs general graphs); set cover.
Polynomial-time reduction of a problem X to another problem Y to
characterize the relative complexity of problems (Karp reductions): X ≤_p Y.
Consequence: if there is a polynomial-time algorithm that solves Y then
there is a polynomial-time algorithm that solves X; if there is no
polynomial-time algorithm that solves X then there is no polynomial-time
algorithm that solves Y. Example: reduction of VERTEX COVER to SET COVER. Computational complexity of problems: the classes P, NP and NPC (i.e., NP-complete problems). NP-hard problems. The "P = NP?" problem.
Examples of NP-complete problems: SAT, 3-SAT, VERTEX COVER,
HAMILTONIAN CYCLE, INDEPENDENT SET, SET COVER.
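A Python sketch of the VERTEX COVER ≤_p SET COVER reduction (the edges become the universe and each vertex becomes the set of its incident edges):

def vertex_cover_to_set_cover(vertices, edges):
    """Return a set-cover instance that has a cover of size k iff the graph
    (vertices, edges) has a vertex cover of size k."""
    universe = set(edges)                                   # elements = edges
    sets = {v: {e for e in edges if v in e} for v in vertices}
    return universe, sets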
Materials:
(by Zoom)
Coping with intractability. Introduction to approximation algorithms for NPO problems (optimization problems
whose decision version belongs to NP). Polynomial-time approximation
algorithms with a known approximation factor. Class APX (i.e., problems
that admit constant-factor polynomial-time approximation
algorithms). Proof that MINIMUM VERTEX COVER belongs to APX
(2-approximations based on finding a maximal
matching in G and on LP rounding).
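A Python sketch of the matching-based 2-approximation for MINIMUM VERTEX COVER (take both endpoints of a greedily built maximal matching):

def vertex_cover_2approx(edges):
    """Any cover must contain at least one endpoint of each matching edge,
    so the returned cover has size at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # (u, v) extends the matching
            cover.update((u, v))
    return cover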
Proof that an NPO problem is NP-hard by a polynomial-time reduction from
an NP-complete (i.e., NPC) problem. Example: proof that TSP (the
traveling salesperson problem) is NP-hard by reduction from HAMILTONIAN CYCLE in
undirected graphs.
Metric TSP (i.e., TSP whose distance function satisfies the triangle
inequality): 2-approximation algorithm; (3/2)-approximation algorithm
by Christofides; proof of correctness of these approximation
factors. Inapproximability (unless P = NP) of general TSP within
any polynomial-time computable
factor α(n) (proof by reduction from HAMILTONIAN CYCLE to TSP).
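A Python sketch of the 2-approximation for metric TSP (an MST computed with Prim's algorithm, followed by a preorder walk with shortcutting), here on Euclidean points as a concrete metric instance:

import math

def two_approx_metric_tsp(points):
    """The preorder walk of the MST, with repeated vertices shortcut, costs at
    most twice the MST weight and hence at most twice the optimal tour."""
    n = len(points)
    def d(i, j):
        return math.dist(points[i], points[j])
    best, parent, in_tree = [math.inf] * n, [0] * n, [False] * n
    best[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):                               # Prim on the complete graph
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and d(u, v) < best[v]:
                best[v], parent[v] = d(u, v), u
    tour, stack = [], [0]                            # preorder walk of the MST
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour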
Materials:
Additional material for further reference (not mandatory): Approximation
Algorithms, book by Vijay V. Vazirani
10.01.2022
(by Zoom)
Greedy algorithm for SET COVER (minimum-cardinality set cover): idea of the proof that it is an O(log n)-
approximation algorithm.
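A Python sketch of the greedy set-cover algorithm (the sets are given as a dict from set name to its elements):

def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most uncovered elements;
    this gives an H_n = O(log n) approximation of the minimum cover."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            break                      # remaining elements cannot be covered
        chosen.append(best)
        uncovered -= sets[best]
    return chosen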
Introduction to guarding problems in polygons: Fisk's proof of
Chvátal's result that ⌊n/3⌋ guards are always sufficient and occasionally
necessary for guarding a simple polygon with n vertices. Minimization of the number of
guards for any given instance: statement without proof that the
problem is NP-hard; idea of the
approximation algorithm by Ghosh (for vertex guards), based on a
reduction to SET COVER.
Materials:
(in classroom)
Recap of the approximation algorithm by Ghosh.
Definition of PARTITION and statement (without proof) that it is NP-complete.
BIN PACKING: proof that it is NP-hard by reduction
from PARTITION;
proof that it belongs to APX by showing that the "first fit" strategy
yields a 2-approximation.
Brief mention that the "first fit decreasing" strategy gives a 3/2-approximation.
Proof of the inapproximability of bin packing for any constant smaller
than 3/2, unless P=NP, by reduction from PARTITION.
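A Python sketch of first fit (item sizes in (0, 1], bin capacity 1; sorting the items in decreasing order first gives first fit decreasing):

def first_fit(items, capacity=1.0):
    """Place each item in the first open bin with enough room, opening a new
    bin if none fits; at most one bin is at most half full, so the number of
    bins used is at most twice the optimum."""
    free, contents = [], []            # remaining capacity / items of each bin
    for x in items:
        for i in range(len(free)):
            if x <= free[i] + 1e-12:   # small tolerance for float sizes
                free[i] -= x
                contents[i].append(x)
                break
        else:
            free.append(capacity - x)
            contents.append([x])
    return contents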
Materials: