Lecture IV: Pure Graph Problems

“Read Euler, read Euler, he is our master in everything” – Pierre-Simon Laplace (1749–1827)

Graph Theory is said to have originated with Euler (1707–1783), who was asked to resolve a pastime question of the citizens of Königsberg: whether it is possible to traverse all the 7 bridges joining two islands and the mainland without retracing any path. Euler recognized in this problem the essence of Leibniz's interest in founding a new kind of mathematics (“analysis situs”), which might be interpreted as topological or combinatorial analysis in modern language.

A graph is fundamentally a set of mathematical relations (called incidence relations) connecting two sets, a vertex set V and an edge set E. A simple notion of an edge e ∈ E is where e is a pair of vertices u, v ∈ V. The pair can be ordered, e = (u, v), or unordered, e = {u, v}, leading to two different kinds of graphs. We shall denote¹ such a pair by “u−v”, and rely on context to determine whether an ordered or unordered edge is meant. For unordered edges, we have u−v = v−u; but for ordered edges, u−v ≠ v−u unless u = v. We say the vertices u and v are incident on u−v.

Graphs are useful for modeling abstract mathematical relations in computer science as well as in many other disciplines. Here are some examples of graphs:

Adjacency between Countries. In Figure 1(a), we have a map of the political boundaries separating 7 countries. Figure 1(b) shows a graph with vertex set V = {1, 2, . . . , 7} representing these countries. An edge i−j represents the relationship that countries i and j share a continuous common border. Note that countries 2 and 3 share two continuous common borders, and so we have two copies of the edge 2−3.

Flight Connections. A graph can represent the flight connections of a particular airline, with the set V representing the airports and the set E representing the flight segments that connect pairs of airports.
Each edge will typically have auxiliary data associated with it. For example, the data may be numbers representing the flying time of that flight segment.

Hypertext Links. In hypertext documents on the world wide web, a document will generally have links (“hyper-references”) to other documents. We can represent this linkage structure by a graph whose vertices V represent individual documents; each edge (u, v) ∈ V × V indicates that there is a link from document u to document v.

In many applications, our graphs have associated data such as numerical values (“weights”) attached to the edges and vertices. These are called weighted graphs. The flight connection graph above is an example of this. Graphs without such numerical values are called pure graphs. In this chapter, we restrict attention to pure graph problems; weighted graphs will be treated in later chapters.

¹ We have taken this highly suggestive notation from [2].

c Chee-Keng Yap, Basic Version, October 13, 2005

Figure 1: (a) Political map of 7 countries. (b) Adjacency relationship of countries.

The algorithmic issues of pure graphs mostly relate to the concepts of connectivity and paths. Many of these algorithms can be embedded in one of two graph searching strategies called depth-first search (DFS) and breadth-first search (BFS). Two other important, if less elementary, problems of pure graphs are: testing if two graphs are isomorphic, and testing if a graph is planar. We will treat planarity testing. Tarjan [3] was one of the first to systematically study the DFS algorithm and its applications. A lucid account of basic graph algorithms may be found in Sedgewick [2].

§1. Bigraphs and Digraphs

Basic graph definitions are given here. In this book, “graphs” refer to either directed graphs (“digraphs”) or undirected graphs (“bigraphs”). All graphs are assumed to be simple.

Set-Theoretic Notations for Simple Graphs.
A graph G is basically given by two sets, V and E, called the vertex set and edge set, respectively. We begin by describing “simple graphs” in the three most important cases. The terminology “simple” will become clear later.

For any set V and integer k ≥ 0, let V^k, 2^V, and \binom{V}{k} denote, respectively, the k-fold Cartesian product of V, the power set of V, and the set of k-subsets of V. The first two notations (V^k, 2^V) are standard; the last one is less so. These notations have a certain “umbral quality” because they satisfy the following equations on set cardinality:

    |V^k| = |V|^k,    |2^V| = 2^{|V|},    |\binom{V}{k}| = \binom{|V|}{k}.

We can define our 3 varieties of simple graphs as follows:

• A hypergraph is a pair G = (V, E) where E ⊆ 2^V.
• A directed graph (or simply, digraph) is a pair G = (V, E) where E ⊆ V^2.
• An undirected graph (or simply, bigraph) is a pair G = (V, E) where E ⊆ \binom{V}{2}.

In all three cases, the elements of V are called vertices. Elements of E may be called edges, but we shall reserve this term for digraphs and bigraphs; for hypergraphs, we call them hyperedges. Hence each edge is defined by a pair u, v ∈ V, and we use the notation u−v to represent the corresponding edge of the digraph or bigraph. This convention is useful because many of our definitions cover both digraphs and bigraphs. Similarly, the term “graph” will cover digraphs and bigraphs. Some basic graph terminology is collected in §I (Appendix A). Hypergraphs are sometimes called set systems (see matroid theory in Chapter 5).

Graphical representation of graphs. Bigraphs and digraphs are “linear graphs” in which each edge is incident on one or two vertices. Such graphs have a natural graphical representation: elements of V are represented by points (or circles) in the plane, and elements of E are represented by finite curve segments connecting these points.
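The cardinality identities above are easy to check computationally. Here is a small sketch of ours (not part of the text), using Python's itertools, where product, a powerset construction, and combinations play the roles of V^k, 2^V and \binom{V}{k}:

```python
from itertools import product, combinations, chain
from math import comb

V = {"a", "b", "c", "d"}
k = 2

# k-fold Cartesian product V^k: |V^k| = |V|^k
Vk = list(product(V, repeat=k))
assert len(Vk) == len(V) ** k

# power set 2^V: |2^V| = 2^|V|
powerset = list(chain.from_iterable(combinations(V, r) for r in range(len(V) + 1)))
assert len(powerset) == 2 ** len(V)

# k-subsets binom(V, k): |binom(V, k)| = binom(|V|, k)
k_subsets = list(combinations(V, k))
assert len(k_subsets) == comb(len(V), k)
```

With |V| = 4 and k = 2, the three collections have 16, 16 and 6 elements, respectively.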
Figure 2: (a) A bigraph. (b) A digraph.

In Figure 2(a), we display a bigraph (V, E) where V = {a, b, c, d, e} and E = {a−b, b−c, c−d, d−a, c−e, b−d}. In Figure 2(b), we display a digraph (V, E) where V = {1, 2, . . . , 6} and E = {1−5, 5−4, 4−3, 3−2, 2−1, 1−6, 2−6, 3−6, 4−6, 5−6, 5−2, 5−3, 2−3}. We display a digraph edge u−v by drawing an arrow from the start vertex u to the stop vertex v. Thus, in Figure 2(b), all the edges incident on vertex 6 have 6 as the stop vertex, and so the arrow heads all point at 6. Thus edges are “directed” from the start to the stop vertex. In contrast, the curve segments in bigraphs are undirected (bi-directional).

Auxiliary Data Convention. We often want to associate some additional data with a graph. For instance, we may want to designate two vertices s, t ∈ V as the source and destination vertices. In this case we may write G = (V, E; s, t). In general, auxiliary data will be separated from the pure graph data by a semi-colon: G = (V, E; · · · ).

Exercises

Exercise 1.1: Prove or disprove: there exists a bigraph G = (V, E) where |V| is odd and the degree of each vertex is odd. ♦

Exercise 1.2: (i) How many bigraphs, digraphs, and hypergraphs are there on n vertices? (ii) How many non-isomorphic bigraphs, digraphs, and hypergraphs are there on n vertices? Estimate these with upper and lower bounds. ♦

Exercise 1.3: A trigraph is G = (V, E) where E ⊆ \binom{V}{3}. An element f ∈ E is called a face (not “edge”). A pair e = {u, v} ∈ \binom{V}{2} is called an edge provided {u, v} ⊆ f for some face f; in this case, we say f is incident on e, and e bounds f. The trigraph is an (abstract) surface if each edge bounds exactly two faces. How many non-isomorphic surfaces are there on n = |V| vertices? First consider the cases n = 4, 5, 6. ♦

End Exercises

§2. Path Concepts

Most basic concepts in pure graphs revolve around the notion of a path.
Let G = (V, E) be a graph (i.e., digraph or bigraph). If u−v is an edge, we say that v is adjacent to u. Note that adjacency is an asymmetric relation for digraphs but symmetric for bigraphs. A typical usage is this: “for each v adjacent to u, do . . . v . . .”.

Let p = (v0, v1, . . . , vk), k ≥ 0, be a sequence of vertices. We call p a path if vi is adjacent to vi−1 for all i = 1, 2, . . . , k. In this case, we can denote p by (v0−v1− · · · −vk). The length of p is k (not k + 1). The path is trivial if it has length 0, p = (v0). Call v0 the source and vk the target of p; both v0 and vk are endpoints of p. We also say p is a path from v0 to vk. The path p is closed if v0 = vk, and simple if all its vertices, with the possible exception of v0 = vk, are distinct. Note that a trivial path is always closed and simple. The reverse of p = (v0−v1− · · · −vk) is the path p^R := (vk−vk−1− · · · −v0). In a bigraph, p is a path iff p^R is a path.

The Path Metric. Define δ_G(u, v), or simply δ(u, v), to be the minimum length of a path from u to v. If there is no path from u to v, then δ(u, v) = ∞. We also call δ(u, v) the link distance from u to v; this terminology will be useful when δ(u, v) is later generalized to weighted graphs, and when we still need to refer to the ungeneralized concept. It is easy to see that

• δ(u, v) ≥ 0, with equality iff u = v.
• (Triangular Inequality) δ(u, v) ≤ δ(u, w) + δ(w, v).
• When G is a bigraph, δ(u, v) = δ(v, u).

These three properties amount to saying that δ(u, v) is a metric on V in the case of a bigraph. If δ(u, v) < ∞, we say v is reachable from u.

Subpaths. Let p and q be two paths: p = (v0−v1− · · · −vk), q = (u0−u1− · · · −uℓ). If p terminates at the vertex where q begins, i.e., vk = u0, then the operation of concatenation is well-defined.
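Computing the link distance from a fixed vertex u is a typical application of the breadth-first search developed in §4. As a forward-looking sketch (the function name link_distance and the use of Python's deque are ours, not the text's), here is δ(u, ·) on the bigraph of Figure 2(a):

```python
from collections import deque
from math import inf

def link_distance(adj, u):
    """Return delta(u, v) for every vertex v, as a dict.
    adj maps each vertex to the list of vertices adjacent to it."""
    delta = {v: inf for v in adj}   # delta(u, v) = infinity while v is unreached
    delta[u] = 0                    # delta(u, u) = 0
    Q = deque([u])
    while Q:
        x = Q.popleft()
        for y in adj[x]:
            if delta[y] == inf:     # y reached for the first time
                delta[y] = delta[x] + 1
                Q.append(y)
    return delta

# The bigraph of Figure 2(a):
adj = {"a": ["b", "d"], "b": ["a", "c", "d"], "c": ["b", "d", "e"],
       "d": ["a", "b", "c"], "e": ["c"]}
print(link_distance(adj, "a"))   # {'a': 0, 'b': 1, 'c': 2, 'd': 1, 'e': 3}
```

Unreachable vertices keep the value ∞, matching the convention δ(u, v) = ∞ above.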
The concatenation of p and q gives a new path, written

    p; q := (v0−v1− · · · −vk−1−vk−u1−u2− · · · −uℓ).

Note that the common vertices vk and u0 are “merged” in the new path. Clearly concatenation of paths is associative, (p; q); r = p; (q; r), which we may simply write as p; q; r. We say that a path p contains q as a subpath if p = p′; q; p′′ for some p′, p′′. If, in addition, q is a closed path, we can excise q from p to obtain the path p′; p′′. Whenever we write a concatenation expression “p; q”, etc., we will assume that the operation is well-defined.

Cycles. Two paths p, q are cyclic equivalent if there exist paths r, r′ such that p = r; r′ and q = r′; r. We write p ≡ q in this case. Clearly p must be a closed path: the source of r and the target of r′ must be the same in order for r′; r to be well-defined, but this means that the source and target of p are identical. Similarly, q must be a closed path. It is easily checked that cyclic equivalence is a mathematical equivalence relation. For instance, the following four closed paths are cyclic equivalent:

    (1−2−3−4−1) ≡ (2−3−4−1−2) ≡ (3−4−1−2−3) ≡ (4−1−2−3−4).

The first and the third closed paths are cyclic equivalent because of the following decomposition:

    (1−2−3−4−1) = (1−2−3); (3−4−1),    (3−4−1−2−3) = (3−4−1); (1−2−3).

We define a cycle as an equivalence class of closed paths. If the equivalence class of p is the cycle Z, we call p a representative of Z; if p = (v0, v1, . . . , vk) then we write Z as

    Z = [p] = [v1−v2− · · · −vk] = [v2−v3− · · · −vk−v1].

Note that if p has k + 1 vertices, then [p] is written with only k vertices, since the last vertex may be omitted. In particular, a trivial path p = (v0) gives rise to the cycle which is an empty sequence, Z = [ ]. We call this the trivial cycle. In the case of digraphs, we can have self-loops of the form u−u, and p = (u, u) is a closed path; the corresponding cycle is [u].
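Deciding cyclic equivalence amounts to a rotation test on the vertex sequences (compare Exercise 2.2 below). A sketch of ours, assuming closed paths are given as vertex tuples: drop the repeated endpoint of each closed path, then test whether one sequence occurs as a contiguous block in the other doubled.

```python
def same_cycle(p, q):
    """Decide whether closed paths p and q (vertex tuples with p[0] == p[-1])
    are cyclic equivalent, i.e., represent the same cycle."""
    assert p[0] == p[-1] and q[0] == q[-1], "inputs must be closed paths"
    a, b = list(p[:-1]), list(q[:-1])   # drop the repeated endpoint
    if len(a) != len(b):
        return False
    if not a:
        # two trivial closed paths are cyclic equivalent only at the same vertex
        return p[0] == q[0]
    # b is a rotation of a  iff  b occurs as a contiguous block inside a + a
    doubled, n = a + a, len(a)
    return any(doubled[i:i + n] == b for i in range(n))

print(same_cycle((1, 2, 3, 4, 1), (3, 4, 1, 2, 3)))   # True
print(same_cycle((1, 2, 3, 4, 1), (1, 3, 2, 4, 1)))   # False
```

The doubling trick runs in O(k²) time as written; with a linear-time string matcher it improves to O(k).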
Path concepts that are invariant under cyclic equivalence are “transferred” to cycles automatically: for instance, we may speak of the length or reverse of a cycle, etc. A cycle [v1− · · · −vk] is simple if the vertices v1, . . . , vk are distinct. If we excise a finite number of closed subpaths from a closed path p, we obtain a closed path q; call [q] a subcycle of [p]. For instance, [1−2−3] is a subcycle of [1−2−a−b−c−2−3−d−e−3]. From the general transfer principle, we say a cycle Z = [p] is trivial iff p is a trivial path.

We next wish to define the notion of a “cyclic graph”. For a digraph G, we say G is cyclic if it contains a nontrivial cycle. But for bigraphs, this simple definition will not do: for any edge u−v in a bigraph, we get the closed path (u−v−u) and hence the non-trivial cycle [u−v]. Thus this definition of “cyclic graphs” is unsuitable for bigraphs. For bigraphs, we proceed as follows: first, define a closed path p = (v0−v1− · · · −vk) to be reducible if one of the following two conditions holds:

• vi−1 = vi+1 for some i = 1, . . . , k − 1, or
• k ≥ 2 and v1 = vk−1.

Otherwise p is said to be irreducible. If a closed path is reducible then it is a non-simple path. A cycle Z = [p] is reducible iff any of its representatives is reducible. So the trivial cycle and each self-loop cycle [u] are irreducible. Finally, a bigraph is said to be cyclic if it contains some irreducible non-trivial cycle. Note that irreducible non-trivial cycles have length at least 3.

Connectivity. Let G = (V, E) be a graph (either di- or bigraph). Two vertices u, v in G are connected if there is a path from u to v and a path from v to u; equivalently, δ(u, v) and δ(v, u) are both finite. Clearly, connectedness is an equivalence relation on V. A subset C of V is a connected component of G if it is an equivalence class of this relation.
For short, we may simply call C a component of G. Alternatively, C is a non-empty maximal subset of vertices in which any two are connected. Thus V is partitioned into disjoint components. If G has only one connected component, it is said to be connected. When |C| = 1, we call C a trivial component. The subgraph of G induced by C is called a component graph of G. NOTE: It is customary, and for emphasis, to add the qualifier “strong” when discussing components of digraphs.

Figure 3: (a) Digraph G6. (b) Component graph of C = {2, 3, 5}. (c) Reduced graph G6^c.

For example, the graph G6 in Figure 3(a) has C = {2, 3, 5} as a component. The component graph corresponding to C is shown in Figure 3(b). The other components of G6 are {1}, {4}, {6}, all trivial.

Given G, we define the reduced graph G^c = (V^c, E^c) whose vertices are the components of G, and whose edges are the pairs (C, C′) ∈ E^c such that there exists an edge from some vertex in C to some vertex in C′. This is illustrated in Figure 3(c). CLAIM: G^c is acyclic. In proof, suppose there is a non-trivial cycle Z^c in G^c. This translates into a cycle Z in G that involves at least two components C, C′. The existence of Z contradicts the assumption that C, C′ are distinct components. Note that the reduced graph is essentially trivial for bigraphs, so this concept is only applied to digraphs. For bigraphs, we will later introduce a stronger notion of connectivity, called bi-connectivity.

DAGs and Trees. We have just defined cyclic bigraphs and digraphs. A graph is acyclic if it is not cyclic. Acyclic graphs are a very important subclass of graphs. The common acronym for a directed acyclic graph is DAG. A tree is a DAG in which there is a unique vertex u0, called the root, such that there exists a unique path from u0 to any other vertex.
Trees are ubiquitous in computer science. Thus, we have free trees, rooted trees, ordered trees, search trees, etc. A free tree is a connected acyclic bigraph. Such a tree has exactly |V| − 1 edges, and for every pair of vertices, there is a unique path connecting them. These two properties could also be used as the definition of a free tree. A rooted tree is a free tree together with a distinguished vertex called the root. We can convert a rooted tree into a directed graph in two ways: by directing each of its edges away from the root (so the edges are child pointers), or by directing each edge towards the root (so the edges are parent pointers).

Exercises

Exercise 2.1: Let u be a vertex in a graph G. Can u be adjacent to itself if G is a bigraph? If G is a digraph? ♦

Exercise 2.2: Describe an efficient algorithm which, given two closed paths p = (v0−v1− · · · −vk) and q = (u0−u1− · · · −uℓ), determines whether they represent the same cycle (i.e., are cyclic equivalent). What is the complexity of your algorithm? Make explicit any assumptions you need about the representation of paths and vertices. ♦

End Exercises

§3. Graph Representation

The representation of graphs in computers is relatively straightforward if we assume array capabilities or pointer structures. The three main representations are:

• Edge list: a linked list of the vertices of G and a list of the edges of G. The lists may be singly- or doubly-linked. E.g., the edge list representations of the two graphs in Figure 2 would be {a−b, b−c, c−d, d−a, d−b, c−e} and {1−5, 1−6, 2−1, 2−3, 2−6, 3−2, 3−6, 4−3, 4−6, 5−2, 5−3, 5−4, 5−6}.

• Adjacency list: a list of the vertices of G where, for each vertex v, we store the list of vertices that are adjacent to v. If the vertices adjacent to u are v1, v2, . . . , vm, we may denote the adjacency list for u by u : (v1, v2, . . . , vm).
E.g., the adjacency list representations of the graphs in Figure 2 are

    {a : (b, d), b : (a, d, c), c : (b, d, e), d : (a, b, c), e : (c)}

and

    {1 : (5, 6), 2 : (1, 3, 6), 3 : (2, 6), 4 : (3, 6), 5 : (2, 3, 4, 6), 6 : ()}.

• Adjacency matrix: this is an n × n Boolean matrix whose (i, j)-th entry is 1 iff vertex j is adjacent to vertex i. E.g., the adjacency matrix representations of the graphs in Figure 2 are

        a b c d e                1 2 3 4 5 6
      a 0 1 0 1 0             1  0 0 0 0 1 1
      b 1 0 1 1 0             2  1 0 1 0 0 1
      c 0 1 0 1 1     and     3  0 1 0 0 0 1
      d 1 1 1 0 0             4  0 0 1 0 0 1
      e 0 0 1 0 0             5  0 1 1 1 0 1
                              6  0 0 0 0 0 0

Note that the matrix for a bigraph is symmetric. The adjacency matrix can be generalized to store arbitrary values, to represent weighted graphs.

Size Parameter. Two size parameters are used in measuring the computational complexity of graph problems: |V| and |E|, typically denoted by n and m. Clearly m is not completely independent of n, but satisfies the bounds 0 ≤ m ≤ n². If m = o(n²) for the graphs in a family G, we say G is a sparse family of graphs; otherwise the family is dense. For example, the family of planar graphs is sparse because m = O(n) for planar graphs. Some computational techniques can exploit the sparsity of input graphs. The first two methods of representing graphs use O(m + n) space, while the last method uses O(n²) space; thus the adjacency matrix representation is generally not used for sparse graphs.

Arrays. If A is an array and i ≤ j are integers, we write A[i..j] to indicate that the array A has j − i + 1 elements, indexed from i to j. Thus A contains the set of elements {A[i], A[i + 1], . . . , A[j]}. In descriptions of graph algorithms, it is convenient to assume that the vertex set of a graph is V = {1, 2, . . . , n}.
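The three representations are easy to pass between. A sketch in Python (our own construction, using the digraph of Figure 2(b)) builds the adjacency list and the adjacency matrix from an edge list:

```python
# Digraph of Figure 2(b) as an edge list: pairs (u, v) for edges u-v.
edges = [(1, 5), (5, 4), (4, 3), (3, 2), (2, 1), (1, 6), (2, 6),
         (3, 6), (4, 6), (5, 6), (5, 2), (5, 3), (2, 3)]
n = 6

# Adjacency list: adj[u] lists the vertices adjacent to u.
adj = {u: [] for u in range(1, n + 1)}
for u, v in edges:
    adj[u].append(v)

# Adjacency matrix: M[i][j] = 1 iff vertex j+1 is adjacent to vertex i+1
# (0-based rows and columns for vertices 1..n).
M = [[0] * n for _ in range(n)]
for u, v in edges:
    M[u - 1][v - 1] = 1

print(adj[5])   # vertices adjacent to 5
print(M[5])     # row of vertex 6: all zeros, since 6 has no outgoing edges
```

The edge list uses O(m + n) space, the adjacency list O(m + n), and the matrix O(n²), matching the space bounds above.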
The list structures can now be replaced by arrays indexed by the vertex set, affording great simplification. For instance, this allows us to iterate over all the vertices using an integer variable. To associate an attribute A with each vertex, we can use an array A[1..n] where A[i] is the value of the A-attribute of vertex i. For instance, if the attribute is weight, then A[i] is the weight of vertex i.

Coloring Scheme. In many graph algorithms we need to keep track of the “processing status” of the vertices. Initially, the vertices are unprocessed, and finally they are processed. Sometimes, we want to indicate an intermediate status of being partially processed. Viewing the statuses as colors, we have a three-color scheme: white, gray, or black, corresponding to the unprocessed, partially processed, and completely processed statuses. Alternatively, the three colors may be called unseen, seen, and done (respectively). Initially, all vertices are unseen or white. The color transitions of each vertex always occur in this order:

    white ⇒ gray ⇒ black,    unseen ⇒ seen ⇒ done.    (1)

For instance, let the color status be represented by the integer array color[1..n], with the convention that white/unseen is 0, gray/seen is 1, and black/done is 2. Then a color transition for vertex i is achieved by the increment operation color[i]++. Sometimes a two-color scheme is sufficient: in this case we omit the gray color or the done status.

§4. Breadth First Search

A graph traversal is any systematic method to “visit” each vertex and edge of a graph. Here is the generic graph traversal algorithm: assume that all vertices are initially colored unseen. Start from any vertex s0 and mark it as seen. Add all the edges out of s0 to a set Q. While Q is not empty, remove an edge (u, v) from Q; if v is unseen, color it seen and add all edges out of v to Q. The set Q is stored in some container data-structure.
There are two standard containers: a queue or a stack. These two data structures give rise to the two standard graph traversal algorithms, Breadth First Search (BFS) and Depth First Search (DFS), respectively. Both traversal methods apply to digraphs and bigraphs. However, BFS is often described for bigraphs only, and DFS for digraphs only; we will follow this tradition. In both algorithms, we assume that the input graph G = (V, E; s0) is represented by adjacency lists, and s0 ∈ V is called the source for the search.

The idea of BFS is to systematically visit vertices that are nearer to s0 before visiting those that are further away. For example, suppose we start searching from vertex s0 = a in the bigraph of Figure 2(a). From vertex a, we first visit the vertices b and d, which are at distance 1 from a. Next, from vertex b, we find vertices c and d at distance 1 away; we visit vertex c but not vertex d (which has already been visited). And so on. The trace of this search can be represented by a tree, as shown in Figure 4(a); it is called the “BFS tree”.

Figure 4: (a) BFS tree. (b) Non-tree edges.

More precisely, recall that δ(u, v) denotes the distance from u to v in a graph. The characteristic property of the BFS algorithm is that we will visit u before v whenever

    δ(s0, u) < δ(s0, v) < ∞.    (2)

If δ(s0, u) = ∞, then u will not be visited from s0. The BFS algorithm does not explicitly compute the relation (2) to decide the next node to visit: we will prove below that this property is a consequence of using the queue data structure. The key to the BFS algorithm is the queue ADT, which supports the insertion and deletion of items following the First-In First-Out (FIFO) discipline. If Q is a queue and x an item, we denote the insert and delete operations by Q.enqueue(x) and x ← Q.dequeue(), respectively. To keep track of the status of vertices, we will use the coloring scheme of the previous section (see (1)).
We could use three colors, but for our current purposes two suffice: white/gray, or unseen/seen. We formulate our BFS algorithm as a “skeleton” for accomplishing application-specific functions:

BFS Algorithm
Input: G = (V, E; s0), a graph (bi- or di-).
Output: application-specific.
⊲ Initialization:
0   Initialize the queue Q to contain just s0.
1   INIT(G, s0)
⊲ Main Loop:
    while Q ≠ ∅ do
2       u ← Q.dequeue()                  ⊳ Begin processing u
3       for each v adjacent to u do      ⊳ Process edge u−v
4           PREVISIT(v, u)               ⊳ Previsit v from u
5           if v is unseen then
6               color v seen
7               VISIT(v, u)              ⊳ Visit v from u
8               Q.enqueue(v)
9       POSTVISIT(u)

This algorithm is a skeleton because we have embedded in it several subroutines,

    INIT, PREVISIT, VISIT and POSTVISIT    (3)

that are application-specific; these subroutines will be assumed to be² null operations unless otherwise specified. We refer to the subroutines in (3) as shell subroutines. An application of BFS will amount to filling these shell subroutines with actual code. We can usually omit the PREVISIT step, but see §6 for an example of its use. Note that VISIT(v, u) represents visiting v from u; a similar interpretation holds for PREVISIT(v, u). If this BFS algorithm is standalone code, then INIT(G, s0) may be expected to initialize the color of all vertices to unseen, and then color s0 seen.

There is an underlying tree structure in each BFS computation, rooted at s0: if v is seen from u (see Line 6 in the BFS Algorithm), then the edge u−v is an edge of this tree. This tree is called the BFS tree (see Figure 4(a)). A BFS listing at s0 is a list of all the vertices reachable from s0 in which a vertex u appears before another vertex v whenever (2) holds. E.g., let G be the bigraph in Figure 2(a) and let s0 be vertex a. Then two possible BFS listings at a are

    (a, b, d, c, e) and (a, d, b, c, e).
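The skeleton translates almost line for line into Python. In this sketch of ours, the hook parameters are named after the shell subroutines (3) and default to null operations; the callback mechanism itself is an assumption of this rendering, not the text's. Here we pass a VISIT that records the BFS tree edges and the listing:

```python
from collections import deque

def bfs(adj, s0, INIT=None, PREVISIT=None, VISIT=None, POSTVISIT=None):
    """BFS skeleton: adj is an adjacency-list dict, s0 the source.
    The shell subroutines default to null operations."""
    noop = lambda *args: None
    INIT, PREVISIT = INIT or noop, PREVISIT or noop
    VISIT, POSTVISIT = VISIT or noop, POSTVISIT or noop

    seen = {s0}                      # two-color scheme: unseen/seen
    Q = deque([s0])                  # 0: queue contains just s0
    INIT(adj, s0)                    # 1
    while Q:                         # while Q is non-empty
        u = Q.popleft()              # 2: u <- Q.dequeue()
        for v in adj[u]:             # 3: process edge u-v
            PREVISIT(v, u)           # 4
            if v not in seen:        # 5
                seen.add(v)          # 6: color v seen
                VISIT(v, u)          # 7
                Q.append(v)          # 8
        POSTVISIT(u)                 # 9

# The bigraph of Figure 2(a); VISIT collects BFS tree edges, and the
# listing is the order in which vertices enter the queue.
adj = {"a": ["b", "d"], "b": ["a", "c", "d"], "c": ["b", "d", "e"],
       "d": ["a", "b", "c"], "e": ["c"]}
tree, listing = [], ["a"]
bfs(adj, "a",
    VISIT=lambda v, u: (tree.append((u, v)), listing.append(v)))
print(tree)      # [('a', 'b'), ('a', 'd'), ('b', 'c'), ('c', 'e')]
print(listing)   # ['a', 'b', 'd', 'c', 'e'] -- the first listing in the text
```

With the adjacency lists ordered as above, this reproduces the BFS tree of Figure 4(a) and the first of the two BFS listings.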
We can produce such a listing just by enumerating the vertices of the BFS tree in the order they are visited.

Applications of BFS. We now show how to program the shell subroutines in BFS to solve a variety of problems:

• Suppose you wish to print a BFS listing of the vertices reachable from s0. Then POSTVISIT(u) simply prints the name of u. The other subroutines remain null operations.

• Suppose you wish to compute the BFS tree T. If we view T as a set of edges, then INIT(G, s0) initializes the set T to be empty, and VISIT(v, u) adds the edge u−v to T.

• Suppose you wish to determine the depth d[u] of each vertex u in the BFS tree. Then INIT(G, s0) initializes d[s0] = 0 and d[u] = ∞ for all u ≠ s0, and VISIT(v, u) sets d[v] = 1 + d[u]. Also, the coloring scheme (unseen/seen) could be implemented using the array d[1..n] instead of a separate color array: we simply use the special value d[i] = −1 to indicate unseen vertices; seen vertices satisfy d[i] ≥ 0.

² Alternatively, we could fold the coloring steps into these subroutines, so that they are non-null. We choose to expose the coloring steps in BFS.

BFS Analysis. We shall analyze the behavior of the BFS algorithm on a bigraph. A basic property, implicit in the following discussion, is that the BFS algorithm terminates; this is left as an Exercise. For instance, termination assures us that each vertex of the BFS tree will eventually become the front element of the queue. Let δ(v) ≥ 0 denote the depth of a vertex v in the BFS tree. Note that if v is visited from u, then δ(v) = δ(u) + 1. We first prove a simple lemma:

Lemma 1 (Monotone 0−1 Property). Let the vertices in the queue Q at some time instance be (u1, u2, . . . , uk) for some k ≥ 1, with u1 the earliest enqueued vertex and uk the last enqueued vertex.
The following invariant holds:

    δ(u1) ≤ δ(u2) ≤ · · · ≤ δ(uk) ≤ 1 + δ(u1).    (5)

Proof. The result is clearly true when k = 1. Suppose (u1, . . . , uk) is the state of the queue at the beginning of the while-loop, and (5) holds. In Line 2, we remove u1 and assign it to the variable u. Now the queue contains (u2, . . . , uk), and clearly it satisfies the corresponding inequality

    δ(u2) ≤ δ(u3) ≤ · · · ≤ δ(uk) ≤ 1 + δ(u2).

Suppose in the for-loop, in Line 8, we enqueue a node v that is adjacent to u = u1. Then Q contains (u2, . . . , uk, v) and we see that

    δ(u2) ≤ δ(u3) ≤ · · · ≤ δ(uk) ≤ δ(v) ≤ 1 + δ(u2)

holds, because δ(v) = 1 + δ(u1) ≤ 1 + δ(u2). In fact, every vertex v enqueued in this for-loop has this property. This proves the invariant (5). Q.E.D.

This lemma shows that δ(ui) is monotone non-decreasing across the queue. Indeed, δ(ui) remains constant throughout the list, except possibly for a single jump to the next integer. Thus it has this “0−1 property”: εj := δ(uj+1) − δ(uj) ∈ {0, 1} for all j = 1, . . . , k − 1, and there is at most one j such that εj = 1.

From this lemma, we deduce other basic properties of the BFS algorithm:

Lemma 2.
(i) For any edge u−v, |δ(u) − δ(v)| ≤ 1.
(ii) For each vertex u in the BFS tree, δ(u) = δ(s0, u), i.e., δ(u) is the length of the shortest path from s0 to u.

Proof. (i) Consider the edge u−v. Without loss of generality, assume that u was seen before v. If there is an instant in time when u and v both appear in the queue, then we are done, since |δ(u) − δ(v)| ≤ 1 by the monotone 0−1 property. Assume otherwise. There is an instant t1 when u reaches the head of the queue, and at that point v is still unseen. But as we process the vertices adjacent to u, the vertex v will be seen from u, and in this case δ(v) − δ(u) = 1.

(ii) Let π : (u0−u1−u2− · · · −uk) be a shortest path from u0 = s0 to uk = u, of length k ≥ 1.
It is sufficient to prove that δ(uk) = k. For i ≥ 1, part (i) tells us that δ(ui) ≤ δ(ui−1) + 1. This implies δ(uk) ≤ k + δ(u0) = k. On the other hand, the inequality δ(uk) ≥ k is immediate: δ(s0, uk) = k by our choice of π, and δ(uk) ≥ δ(s0, uk) because there is a path of length δ(uk) from s0 to uk. Q.E.D.

As a corollary: if we print the vertices u1, u2, . . . , uk of the BFS tree in the order that they are enqueued, this represents a BFS listing. This is because δ(ui) is non-decreasing with i, and δ(ui) = δ(s0, ui). Another basic property is:

Lemma 3. If δ(u) < δ(v), then u is VISITed before v is VISITed, and u is POSTVISITed before v is POSTVISITed.

The edges of the graph G can be classified into the following types by the BFS Algorithm (cf. Figure 4(b)):

• Tree edges: the edges of the BFS tree.
• Level edges: edges between vertices in the same level of the BFS tree, e.g., edge b−d in Figure 4(b).
• Cross-level edges: non-tree edges that connect vertices in two different levels. Note that the two levels differ by exactly one. E.g., edge c−d in Figure 4(b).
• Unseen edges: edges that are not used during the computation; the vertices involved are not reachable from s0.

Each of these four types of edges can arise (see Figure 4(b) for tree, level and cross-level edges). But is the classification complete (i.e., exhaustive)? It is, because any other kind of edge would have to connect vertices at non-adjacent levels of the BFS tree, and this is forbidden by Lemma 2(i). Hence we have:

Theorem 4. If G is a bigraph, the above classification of edges is complete.

We will leave it as an exercise to fill in our BFS shell subroutines to produce the above classification of edges.

Driver Program. In our BFS algorithm we assume that a source vertex s0 ∈ V is given. This guarantees a visit to all vertices reachable from s0. What if we need to process all vertices, not just those reachable from a given vertex?
In this case, we write a “driver program” that repeatedly calls our BFS algorithm. We assume a global initialization which sets all vertices to unseen. Here is the driver program:

BFS Driver Algorithm
Input: G = (V, E), a graph.
Output: application-dependent.
⊲ Initialization:
1   Color all vertices as unseen.
2   GLOBAL INIT(G)
⊲ Main Loop:
3   for each vertex v in V do
4       if v is unseen then
5           call BFS((V, E; v))

Note that with the BFS Driver, we add another shell subroutine, called GLOBAL INIT, to our collection (3).

Time Analysis. Let us determine the time complexity of the BFS Algorithm and the BFS Driver program. We will discount the time for the application-specific subroutines; as long as these subroutines take O(1) time, our complexity analysis remains valid. We assume the adjacency list representation of graphs, and give the time complexity as a function of n = |V| and m = |E|.

Here is the time bound for the BFS algorithm: the initialization is O(1) time and the main loop is Θ(m′), where m′ ≤ m is the number of edges reachable from the source s0, giving a total complexity of Θ(m′). Next consider the BFS Driver program. The initialization is Θ(n) and Line 3 is executed n times. For each actual call to BFS, we have shown that the time is Θ(m′), where m′ is the number of reachable edges. Summing over all such m′, we obtain a total of Θ(m); here we use the fact that the sets of reachable edges for different calls to the BFS subroutine are pairwise disjoint. Hence the Driver program takes time Θ(n + m).

Application: Computing Connected Components. Suppose we wish to compute the connected components of a bigraph G. Assuming V = {1, . . . , n}, let us encode this task as computing an integer array C[1..n] satisfying the property that C[u] = C[v] iff u and v belong to the same component.
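The driver and this component-labeling task can be sketched together in Python (our own rendering: the BFS is inlined rather than expressed through the shell subroutines). A counter plays the role of the global count variable, incremented once per BFS call, and every vertex reached in that call is labeled with the current count:

```python
from collections import deque

def connected_components(adj):
    """Label the components of a bigraph given as an adjacency-list dict:
    returns C with C[u] == C[v] iff u and v are in the same component."""
    C = {}                           # C[u] unset <=> u is unseen
    count = 0                        # component counter (GLOBAL INIT)
    for s0 in adj:                   # the driver loop
        if s0 in C:
            continue                 # s0 already seen by an earlier BFS call
        count += 1                   # new component name (INIT)
        C[s0] = count
        Q = deque([s0])
        while Q:                     # BFS over the component of s0
            u = Q.popleft()
            for v in adj[u]:
                if v not in C:
                    C[v] = count     # label v with the current count (VISIT)
                    Q.append(v)
    return C

# Two components: {1, 2, 3} and {4, 5}.
adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
print(connected_components(adj))   # {1: 1, 2: 1, 3: 1, 4: 2, 5: 2}
```

The final value of count is the number of components, and the whole computation runs in Θ(n + m) time, matching the driver analysis above.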
Intuitively, C[u] is the name of the component that contains u. The component number is arbitrary. To accomplish this task, we assume a global variable called count that is initialized to 0 by GLOBAL INIT(G). Inside the BFS algorithm, the INIT(G, s0) subroutine simply increments the count variable. Finally, the VISIT(v, u) subroutine simply assigns C[v] ← count. The correctness of this algorithm should be clear. If we want to know the number of components in the graph, we can output the value of count at the end of the driver program.

Application: Testing Bipartiteness. A bigraph G = (V, E) is bipartite if V can be partitioned into V = V1 ⊎ V2 such that E ⊆ (V1 × V2) ∪ (V2 × V1). It is clear that all cycles in a bipartite graph must be even (i.e., have an even number of edges). In an exercise, we ask you to show the converse: if G has no odd cycles then G is bipartite. We use the BFS Driver to call BFS(V, E; s) for various s. It is sufficient to show how to detect odd cycles in the component of s. It is not hard to see that if there is a cross-edge u−v (an edge between two vertices at the same level), then we have found an odd cycle. In the exercise, we ask you to show the converse: every odd cycle gives rise to such a cross-edge. It is now a simple matter to modify BFS to detect cross-edges.

In trying to implement the Bipartite Test above, and especially in recursive routines, it is useful to be able to jump out of nested subroutine calls. For this, Java's ability to throw and catch exceptions is very useful. In our bipartite test, BFS can immediately throw an exception when it finds a cross-edge. This exception is caught by the BFS Driver program.

Exercises

Exercise 4.1: Prove that every vertex that is reachable from the source will be seen by BFS. ♦

Exercise 4.2: Prove that the BFS algorithm terminates. ♦

Exercise 4.3: Show that each node is VISITed and POSTVISITed at most once.
Is this true for PREVISIT as well? ♦

Exercise 4.4: Reorganize the BFS algorithm so that the coloring steps are folded into the shell subroutines of INIT, VISIT, etc. ♦

Exercise 4.5: Fill in the shell subroutines so that the BFS Algorithm will correctly classify every edge of the input bigraph. ♦

Exercise 4.6: Let G = (V, E; λ) be a connected bigraph in which each vertex v ∈ V has an associated value λ(v) ∈ R.
(a) Give an algorithm to compute the sum Σ_{v∈V} λ(v).
(b) Give an algorithm to label every edge e ∈ E with the value |λ(u) − λ(v)| where e = u−v. ♦

Exercise 4.7: Why does the above algorithm for computing the connected components of a bigraph fail when we apply it to a digraph? ♦

Exercise 4.8: Referring to the Bipartite Test in the text, prove the following assertions:
(a) If a bigraph has no odd cycles, then it is bipartite.
(b) If a connected graph has an odd cycle, then a BFS search from any source vertex will detect a cross-edge.
(c) Write the pseudo-code for the bipartite test algorithm outlined in the text. This algorithm is to return YES or NO only.
(d) Modify the algorithm in (c) so that in the case of YES, it returns a Boolean array B[1..n] such that V0 = {i ∈ V : B[i] = false} and V1 = {i ∈ V : B[i] = true} is a witness to the bipartiteness of G. Moreover, in the case of NO, it returns an odd cycle. ♦

Exercise 4.9:
(a) Let G = (V, E) be a connected bigraph. For any vertex v ∈ V define

    radius(v, G) := max_{u∈V} distance(u, v),

where distance(u, v) is the length of the shortest path from u to v. A center of G is a vertex v0 such that radius(v0, G) is minimized. We call radius(v0, G) the radius of G and denote it by radius(G). Define the diameter diameter(G) of G to be the maximum value of distance(u, v) over u, v ∈ V. Prove that 2 · radius(G) ≥ diameter(G).
(b) Show that for every natural number n, there are graphs Gn and Hn such that n = radius(Gn) = diameter(Gn) and n = radius(Hn) = ⌈diameter(Hn)/2⌉.
(c) Using DFS, give an efficient algorithm to compute the diameter of an undirected tree (i.e., a connected acyclic undirected graph). Please use the "shell" subroutines for DFS. Be sure to prove the correctness of your algorithm. What is the complexity of your algorithm?
(d) Same as (c), but now using BFS. ♦

Exercise 4.10: Conjecture why the BFS Algorithm is little-used in the processing of digraphs. ♦

End Exercises

§5. Simple Depth First Search

The DFS algorithm turns out to be more subtle than BFS. In some applications, however, it is sufficient to use a simplified version that is as easy as the BFS algorithm. In fact, it might even be easier because we can exploit recursion. Here is an account of this simplified DFS algorithm.

As in BFS, we use a 2-color scheme: each vertex is unseen or seen. We similarly define a DFS tree underlying any particular DFS computation: the edges of this tree are precisely those u−v such that v is seen from u. Starting the search from the source s0, the idea is to go as deeply as possible along any path without visiting any vertex twice. When it is no longer possible to continue a path, we back up towards the source s0. But we back up only enough to be able to go forward in depth again.

As an illustration, suppose G is the digraph in Figure 2(b), and s0 is vertex 1. One possible deepest path from vertex 1 is (1−5−2−3−6). From vertex 6, we back up to vertex 2, from where we can advance to vertex 4. Again we need to back up, and so on. The DFS tree is a trace of this search process; for our present example, we obtain the tree shown in Figure 5(a).

[Figure 5: DFS Tree, parts (a) and (b).]

The simple DFS algorithm is compactly presented using recursion as follows:

Recursive DFS
Input: G = (V, E; s0), a graph (bi- or di-). The vertices in V have been colored seen or unseen.
Output: Application-dependent.
1 Color s0 as seen.
2 for each v adjacent to s0 do
3     PREVISIT(v, s0)
4     if v is unseen then
5         VISIT(v, s0)
6         Recursive DFS(V, E; v) ⊳ Recursive call
7 POSTVISIT(s0)

In this recursive version, there is no INIT(G, s0) step (we do not want to initialize G with every recursive call). The first call to this recursive algorithm must be made by some DFS Driver Program, which must do the necessary setup, including initializing the vertex colors:

DFS Driver
Input: G = (V, E), a graph (bi- or di-).
Output: Application-specific.
1 GLOBAL INIT(G)
2 Color each vertex in V as unseen.
3 for each v in V do
4     if v is unseen then
5         VISIT(v, nil)
6         Recursive DFS(V, E; v) ⊳ recursive call

As in the BFS case, we view both of the above algorithms as algorithmic skeletons; their complete behaviour will depend on the specification of the shell subroutines,

    INIT, PREVISIT, VISIT, POSTVISIT, GLOBAL INIT. (6)

These shell subroutines are null operations unless otherwise specified.

DFS Tree. The root of the DFS tree is s0, and the vertices of the tree are those vertices visited during this DFS search (see Figure 5(a)). This tree can easily be constructed by appropriate definitions of INIT(G, s0), VISIT(v, u) and POSTVISIT(u); this is left as an Exercise.

We prove the following basic result.

Lemma 5 (Unseen Path). Let u, v ∈ V. Then v is a descendant of u in the DFS tree if and only if, at the time that u was first seen, there is an "unseen path" from u to v, i.e., a path (u− · · · −v) comprising only unseen vertices.

Proof. Let t0 be the time when we first see u.
(⇒) We first prove the easy direction: if v is a descendant of u then there is an unseen path from u to v at time t0. For, if (u−u1− · · · −uk−v) is the path from u to v in the DFS tree, then each ui must be unseen at the time we first see ui−1 (here u0 = u and uk+1 = v). Let ti be the time we first see ui.
Then we have t0 < t1 < · · · < tk+1, and thus each ui was still unseen at time t0. Here we use the fact that each vertex is initially unseen, and once seen, never reverts to unseen.
(⇐) We remark that the inductive hypothesis is a little subtle (see the Exercises for another proof, and also for a wrong approach). The subtlety is that the DFS algorithm has its own order for visiting the vertices adjacent to each u, and your induction must account for this order. First, we define a total order on all paths from u to v: If a, b are two vertices adjacent to a vertex u and we visit a before b, then we say “a
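The Recursive DFS and its Driver above can be sketched in Python, with the shell subroutines modeled as optional callbacks that default to null operations, as in the text. The names `simple_dfs` and `dfs_driver`, and the dict-of-lists adjacency representation, are my own assumptions.

```python
def simple_dfs(adj, s0, seen, previsit=None, visit=None, postvisit=None):
    """Recursive DFS skeleton; the shell subroutines PREVISIT, VISIT,
    and POSTVISIT are passed in as callbacks (None means a null op)."""
    seen.add(s0)                       # 1: color s0 as seen
    for v in adj[s0]:                  # 2: for each v adjacent to s0
        if previsit:
            previsit(v, s0)            # 3: PREVISIT(v, s0)
        if v not in seen:              # 4: if v is unseen
            if visit:
                visit(v, s0)           # 5: VISIT(v, s0)
            simple_dfs(adj, v, seen, previsit, visit, postvisit)  # 6
    if postvisit:
        postvisit(s0)                  # 7: POSTVISIT(s0)

def dfs_driver(adj, **shells):
    """DFS Driver: initialize colors once, then call the recursive
    DFS on every vertex that is still unseen."""
    seen = set()                       # color each vertex unseen
    for v in adj:
        if v not in seen:
            simple_dfs(adj, v, seen, **shells)
    return seen
```

One caveat of this recursive presentation: Python's default recursion limit bounds the depth of the DFS tree it can handle, so very deep graphs would need an explicit stack instead.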
