Big O, Introduction to Graph Theory
László Papp
BME
February 21, 2022
The big O notation

Definition
Let f(n) and g(n) be non-negative functions over the positive integers. Then f(n) ∈ O(g(n)) means that there exists a natural number N and a positive constant c such that for every n > N, f(n) ≤ c·g(n).

Example: 2n^3 − 8n^2 + 25 ∈ O(n^3), because 2n^3 − 8n^2 + 25 ≤ 2n^3 for all n > 2 = N (with c = 2).
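The witness pair (c, N) in the definition can be sanity-checked numerically. The sketch below is my illustration, not part of the lecture; `bounded_by` is a hypothetical helper that tests f(n) ≤ c·g(n) over a finite range (a finite check cannot prove the bound, but it can expose a bad witness pair):

```python
def bounded_by(f, g, c, N, upto=1000):
    """Check f(n) <= c*g(n) for all N < n <= upto.
    A finite sanity check, not a proof."""
    return all(f(n) <= c * g(n) for n in range(N + 1, upto + 1))

f = lambda n: 2*n**3 - 8*n**2 + 25
g = lambda n: n**3

print(bounded_by(f, g, c=2, N=2))  # True: the witness pair from the example works
print(bounded_by(f, g, c=2, N=0))  # False: fails for small n (at n=1, f(1)=19 > 2)
```

Note how the threshold N matters: the inequality 2n^3 − 8n^2 + 25 ≤ 2n^3 only holds once 8n^2 ≥ 25.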
Examples for the O notation

Question: Is f(n) = 3n^3 + 2n log(n) ∈ O(n^3)?
Answer: Yes, because there is a pair c, N which satisfies the definition:
3n^3 + 2n log(n) ≤ 3n^3 + 2n^2 ≤ 5n^3 if n ≥ 2, so f(n) ≤ c·n^3 for all n ≥ N if c = 5, N = 2.
Note that the pair c = 6, N = 3 is also good.

Question: Is f(n) = 3n^3 + 2n log(n) ∈ O(n^4)?
Answer: Yes, we can verify the definition again:
3n^3 + 2n log(n) ≤ 3n^3 + 2n^2 ≤ 5n^3 ≤ 5n^4 if n ≥ 2, so f(n) ≤ c·n^4 for all n ≥ N if c = 5, N = 2.

Question: Is f(n) = 3n^3 + 2n log(n) ∈ O(n^2)?
Answer: No.
Homework: Prove it!
Proof that n^3 ∉ O(n^2):
We prove it by contradiction.
Assume the contrary, so suppose that n^3 ∈ O(n^2).
According to the definition of big O notation, there is a natural number N and a positive constant c such that n^3 ≤ c·n^2 for all n ≥ N.
After dividing by n^2 we get that n ≤ c for all n ≥ N.
On the other hand, we know that lim_{n→∞} n = ∞, so the function f(n) = n cannot be bounded by a positive constant.
Therefore we obtained a contradiction, which means that our initial assumption was false.
Remark: The same reasoning shows that f(n) = n^k ∉ O(n^l) if l < k, for all k, l ∈ R.
A property of big O notation:

Claim
If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)).

Proof: f(n) ∈ O(g(n)) means that there is a pair c1 > 0, N1 such that f(n) ≤ c1·g(n) for all n ≥ N1.
Similarly, g(n) ∈ O(h(n)) means that there is a pair c2 > 0, N2 such that g(n) ≤ c2·h(n) for all n ≥ N2.
Combining these we obtain that:
f(n) ≤ c1·g(n) ≤ c1·c2·h(n) for all n ≥ max(N1, N2).
So f(n) ∈ O(h(n)), because c1·c2 > 0, and hence c1·c2, max(N1, N2) is a good pair which satisfies the definition of O(h(n)).
The hierarchy of functions

Let f(n) << g(n) denote that f(n) ∈ O(g(n)) but g(n) ∉ O(f(n)).

Claim
▶ log(n) << n^k for any positive k
▶ n^k << 2^n for any k

log(n) << √n << n << n^2 << n^3 << n^1000 << 2^n << e^n.
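One way to see f(n) << g(n) informally is that the ratio g(n)/f(n) grows without bound. The snippet below is my illustration (not from the lecture) of a few consecutive pairs from the hierarchy; the log base does not affect whether the ratio diverges:

```python
import math

# Ratios g(n)/f(n) for consecutive pairs in the hierarchy; each ratio
# growing without bound illustrates f(n) << g(n) (illustration, not proof).
pairs = [
    ("log(n) << sqrt(n)", lambda n: math.sqrt(n) / math.log(n)),
    ("sqrt(n) << n",      lambda n: n / math.sqrt(n)),
    ("n^3 << 2^n",        lambda n: 2**n / n**3),
]
for name, ratio in pairs:
    # Evaluate at n = 10, 100, 1000: each list is strictly increasing.
    print(name, [ratio(10**k) for k in (1, 2, 3)])
```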
The time complexity of addition

Input: Integers a and b. The length of the input is log a + log b = n. An elementary step is the addition of two bits.

  147        10010011
+ 105      + 01101001
  252        11111100

Since we make at most two additions in each column (one is coming from the carry bit), the number of operations is at most 2·max(log a, log b) ≤ 2(log a + log b) = 2n ∈ O(n).
So the running time is linear in the size of the input. Note that we cannot have a much faster algorithm for addition, since to add two numbers we have to read them, and reading requires n steps.

Question: We know that we can add any two integers of length 8 in 1 minute. How much time do we need to add two integers of length 40?
Answer: Since the time complexity is linear and 40 = 5 · 8, we can do this task in approximately 5 · 1 = 5 minutes.
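The column-by-column addition above can be sketched in code. This is my illustration (the lecture only argues on paper); `add_bits` is a hypothetical helper that also counts the single-bit additions, showing the 2·max(log a, log b) bound:

```python
def add_bits(a_bits, b_bits):
    """Grade-school binary addition of two bit lists (most significant first).
    Counts single-bit additions: at most 2 per column, as on the slide."""
    a_bits, b_bits = a_bits[::-1], b_bits[::-1]   # least significant bit first
    result, carry, ops = [], 0, 0
    for i in range(max(len(a_bits), len(b_bits))):
        x = a_bits[i] if i < len(a_bits) else 0
        y = b_bits[i] if i < len(b_bits) else 0
        s = x + y      # first bit addition in this column
        ops += 1
        s += carry     # second addition, coming from the carry bit
        ops += 1
        result.append(s % 2)
        carry = s // 2
    if carry:
        result.append(carry)
    return result[::-1], ops

bits = lambda n: [int(b) for b in bin(n)[2:]]
total, ops = add_bits(bits(147), bits(105))
print(int("".join(map(str, total)), 2), ops)  # 252 16  (16 = 2 * 8 columns)
```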
The benefits of O notation

▶ We can give useful estimates on time complexities which are easier to work with. Instead of n^3 + 2n^2 − 10n + log(n) we can write O(n^3).
▶ We can compare algorithms by their time complexities.
For example, if we have algorithms A and B for the same problem and the time complexity of A is in O(n^2) but the complexity of B is not in O(n^2), then algorithm A runs faster on big inputs.
▶ We can make predictions on the running time of the algorithm on long inputs.

Warning! f(n) ∈ O(g(n)) does not mean that the growth rates of f and g are the same!
Ω and Θ notations

Let f(n) and g(n) be non-negative functions over the positive integers.
f(n) ∈ O(g(n)) means that a constant multiple of g(n) is an upper bound on f(n) after a while; more precisely:
∃ c > 0, N ∈ Z such that f(n) ≤ c·g(n) for all n ≥ N.

We can state a similar lower bound on f by using the Ω notation:
Definition: f(n) ∈ Ω(g(n)) means that a constant multiple of g(n) is a lower bound on f(n) after a while; more precisely:
∃ c > 0, N ∈ Z such that f(n) ≥ c·g(n) for all n ≥ N.

Definition: We say that f(n) ∈ Θ(g(n)) if f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)).
f(n) ∈ Θ(g(n)) means that the growth rates of f(n) and g(n) are the same after a while.
Example for Θ

f(n) = n^2 + 3n + 1 ∈ Θ(n^2), because:
▶ n^2 + 3n + 1 ≤ n^2 + 3n^2 + n^2 = 5n^2 if n ≥ 1, so f(n) ∈ O(n^2).
▶ n^2 + 3n + 1 ≥ n^2 if n ≥ 0, so f(n) ∈ Ω(n^2).
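Both witness pairs from the example can be checked on a finite range. This is my sanity-check sketch, not a proof and not part of the lecture:

```python
f = lambda n: n*n + 3*n + 1

# Upper bound witness (c=5, N=1) and lower bound witness (c=1, N=0)
# from the slide, verified on a finite range only.
assert all(f(n) <= 5 * n*n for n in range(1, 10_000))
assert all(f(n) >= 1 * n*n for n in range(1, 10_000))
print("f(n) is sandwiched between n^2 and 5n^2 on the tested range")
```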
What kind of time complexity do we like?

Of course we want to complete a task as soon as possible, so we usually choose the faster algorithm.

Definition
We say that an algorithm has polynomial time complexity, or it is called polynomial for short, if its time complexity is in O(n^k) for a fixed k.

We prefer polynomial algorithms, because technological advancement helps us to solve bigger and bigger problems by executing polynomial algorithms. What we do not like are algorithms whose time complexity is an exponential function of the input size.
Comparison of polynomial and exponential time complexity

Let fA(n) = n^7 and fB(n) = 2^n be the time complexities of algorithms A and B, respectively. We consider a recent computer processor which can perform 10^10 elementary operations every second.
The required amount of time to run the algorithms on an input of size n:

Input size  | 10        | 30       | 50          | 70
Algorithm A | 10^−3 sec | 2 secs   | 1.2 minutes | 14 minutes
Algorithm B | 10^−7 sec | 0.1 secs | 1.3 days    | 3744 years

Note: Algorithms with exponential time complexity are not always bad. Some of them are used in practice when the size of each possible input is small.
Introduction to Graphs

Definition
A graph is an ordered pair of sets G = (V, E), where V is a nonempty set and E is a set of pairs made from the elements of V. The elements of V are called vertices or nodes. We say that an element of E is an edge. The numbers of vertices and edges are denoted by v(G) and e(G), respectively.

Example:
V(G) = {1, 2, 3, 4}
E(G) = {{1,2}; {1,3}; {1,4}; {3,4}}
[Drawing of this graph: vertices 1, 2, 3, 4 with edges {1,2}, {1,3}, {1,4}, {3,4}.]
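The example graph can be stored directly as the two sets from the definition. This is my sketch, not part of the lecture; `frozenset` models the unordered pairs {u, v}:

```python
# The example graph G = (V, E) from the slide.
V = {1, 2, 3, 4}
E = {frozenset(p) for p in [(1, 2), (1, 3), (1, 4), (3, 4)]}

print(len(V), len(E))  # 4 4, i.e. v(G) = 4 and e(G) = 4
```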
Drawing of a graph

We can draw a graph on the plane. In a drawing, each vertex is represented by a disc and each edge by a curve ending at its vertices.
Note that a drawing of a graph is not equivalent to the graph itself! A graph has many different drawings.
Example: [Two different drawings of the same graph.]
Real world examples for graphs
[Figures: transit maps, the internet in 1998, social networks, electric circuits.]
Loops, multiple edges

Definition: If edge e is the pair {v, w}, then we say that vertices v and w are the end vertices or end points of e. If v = w, then e is called a loop. If two different edges have the same end vertices, then they are called multiple or parallel edges. A simple graph has neither loops nor multiple edges.

Examples: V(G) = {1, 2, 3}
E(G) = {{1,1}; {1,2}; {2,3}; {2,3}}
1 and 2 are the end vertices of edge g = {1,2}.
h = {1,1} is a loop.
e and f are multiple edges.
This graph is not simple!
[Drawing: vertex 1 with loop h = {1,1}, edge g = {1,2}, and parallel edges e = {2,3} and f = {2,3} between vertices 2 and 3.]
Adjacency and incidence

Definition: A vertex v and an edge e are incident if v is an end vertex of e.
Edges e and f are adjacent if they have a common end vertex.
Vertices u and v are adjacent if {u, v} is an edge of the graph.
An isolated vertex is not incident to any edge.
The number of edges which are incident to v is called the degree of v and is denoted by d(v).

Examples:
Vertex 1 is incident to edge {1,2}.
Vertices 1 and 2 are adjacent.
Vertices 2 and 3 are not adjacent.
Edges {1,3} and {3,4} are adjacent.
5 is an isolated vertex.
d(1) = 3, d(2) = 1.
[Drawing: vertices 1, 2, 3, 4 with edges {1,2}, {1,3}, {1,4}, {3,4}, plus the isolated vertex 5.]
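The definitions above translate directly to code. This is my sketch on the slide's example graph (including the isolated vertex 5); `degree` and `adjacent` are hypothetical helper names:

```python
# The slide's example graph; degree = number of incident edges.
V = {1, 2, 3, 4, 5}
E = [frozenset(p) for p in [(1, 2), (1, 3), (1, 4), (3, 4)]]

def degree(v):
    """d(v): count the edges incident to v."""
    return sum(1 for e in E if v in e)

def adjacent(u, v):
    """Vertices u and v are adjacent iff {u, v} is an edge."""
    return frozenset((u, v)) in E

print([degree(v) for v in sorted(V)])  # [3, 1, 2, 2, 0] -- vertex 5 is isolated
print(adjacent(1, 2), adjacent(2, 3))  # True False
```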
Subgraphs

Definition: H is a subgraph of graph G if V(H) ⊆ V(G), E(H) ⊆ E(G), and H is a graph. We denote this relation by H ⊆ G.
Example: H is a subgraph of G, but G is not a subgraph of H.
[Drawings: G with vertices 1, 2, 3, 4, 5, and its subgraph H with vertices 2, 3, 4, 5.]
Induced subgraphs

Definition: H is an induced subgraph of graph G if H ⊆ G and E(H) contains all of the edges of G that have both endpoints in V(H).
Example: H is an induced subgraph of G.
[Drawings: G with vertices 1, 2, 3, 4, 5, and the induced subgraph H with vertices 1, 2, 3, 5.]
Remark: Each induced subgraph of G can be obtained from G by deleting some of its vertices and the edges which are incident to those vertices.
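The definition gives an immediate recipe for computing the edge set of an induced subgraph: keep exactly the edges whose endpoints both survive. This is my sketch, not from the lecture; `induced_subgraph` is a hypothetical helper name:

```python
def induced_subgraph(V_sub, E):
    """Edges of the subgraph induced by the vertex set V_sub:
    keep exactly the edges with both endpoints inside V_sub."""
    return {e for e in E if e <= V_sub}   # frozenset <= set is subset test

E = {frozenset(p) for p in [(1, 2), (1, 3), (3, 4), (4, 5)]}
print(induced_subgraph({1, 3, 4}, E))     # edges {1,3} and {3,4} survive
```

This matches the remark: deleting the vertices outside V_sub automatically deletes every edge incident to them.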
Definition: A walk in a graph is an alternating sequence of vertices and edges (v0, e1, v1, e2, v2, . . . , v(k−1), ek, vk) such that ei is incident to vertices v(i−1) and vi for all i. If v0 = vk, then we say that the walk is closed.
[Drawing: a walk that revisits vertices and edges, e.g. v1 = v4, v2 = v5 and e2 = e5.]
A walk is called a trail if all of its edges are different. A path is a trail where all of the vertices are different. A cycle is a closed trail where all the vertices are different except the first and the last.
[Drawings: a path v0, e1, v1, e2, v2, e3, v3 and a cycle v0, e1, v1, e2, v2, e3, v3, e4, v4 = v0.]
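For a simple graph, where each edge is determined by its two endpoints, the walk/trail/path/cycle distinctions can be checked mechanically. This is my sketch (not from the lecture); `classify` is a hypothetical helper that takes a walk given as its vertex sequence:

```python
def classify(walk_vertices, E):
    """Given a walk as a vertex sequence in a simple graph with edge set E,
    return (is_trail, is_path, is_cycle) per the definitions above."""
    vs = walk_vertices
    edges = [frozenset((vs[i], vs[i + 1])) for i in range(len(vs) - 1)]
    assert all(e in E for e in edges), "not a walk in this graph"
    is_closed = vs[0] == vs[-1]
    is_trail = len(edges) == len(set(edges))          # all edges different
    is_path = is_trail and len(vs) == len(set(vs))    # all vertices different
    inner = vs[:-1]                                    # drop repeated endpoint
    is_cycle = is_closed and is_trail and len(inner) == len(set(inner))
    return is_trail, is_path, is_cycle

E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(classify([1, 2, 3, 4], E))      # (True, True, False): a path
print(classify([1, 2, 3, 4, 1], E))   # (True, False, True): a cycle
```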