# Recursive algorithms

What is recursion? The process in which a function calls itself, directly or indirectly, is called recursion, and the corresponding function is called a recursive function. Certain problems can be solved quite naturally using a recursive algorithm. What is the base condition in recursion? In a recursive program, the solution to the base case is provided directly, and the solution to the bigger problem is expressed in terms of smaller problems.

How is a particular problem solved using recursion? The idea is to represent a problem in terms of one or more smaller problems, and to add one or more base conditions that stop the recursion. For example, we can compute the factorial of n if we know the factorial of n-1. Why does a stack overflow error occur in recursion? If the base case is never reached, or is not defined at all, the stack overflow problem may arise. Let us take an example to understand this.


If fact(10) is called, it will call fact(9), fact(8), fact(7), and so on, but if the argument never reaches the base case, the recursion never stops. Once the memory available for these stack frames is exhausted, a stack overflow error occurs. What is the difference between direct and indirect recursion? A function fun is called directly recursive if it calls the same function fun; it is indirectly recursive if it calls another function that eventually calls fun back. The difference between direct and indirect recursion is illustrated in Table 1.
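A minimal Python sketch of the missing-base-case problem described above (the function name fact follows the text; the try/except is our addition, since Python reports stack exhaustion as a RecursionError rather than crashing outright):

```python
def fact(n):
    # No base case: the recursion never stops, so Python eventually
    # raises RecursionError (its version of a stack overflow).
    return n * fact(n - 1)

try:
    fact(10)
except RecursionError:
    print("stack overflow")  # prints: stack overflow
```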

What is the difference between tail and non-tail recursion? A recursive function is tail recursive when the recursive call is the last thing executed by the function. Please refer to the tail recursion article for details. How is memory allocated to different function calls in recursion? When any function is called from main(), memory is allocated to it on the stack. When a recursive function calls itself, the memory for the called function is allocated on top of the memory allocated to the calling function, and a separate copy of the local variables is created for each call.

When the base case is reached, the function returns its value to the function that called it, its memory is de-allocated, and the process continues. Let us see how recursion works by taking a simple function. When printFun(3) is called from main(), memory is allocated to printFun(3), a local variable test is initialized to 3, and statements 1 to 4 are pushed onto the stack, as shown in the diagram below.

In statement 2, printFun(2) is called: memory is allocated to printFun(2), a local variable test is initialized to 2, and statements 1 to 4 are again pushed onto the stack. Similarly, printFun(2) calls printFun(1) and printFun(1) calls printFun(0). When printFun(0) returns, the remaining statements of printFun(1) are executed, it returns to printFun(2), and so on.
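A minimal Python sketch of the printFun example being described (the function name and the variable test follow the text); the print-before-and-after-the-recursive-call structure is what produces the output discussed next:

```python
def printFun(test):
    if test < 1:          # base case: stop recursing at 0
        return
    print(test, end=" ")  # printed on the way down: 3 2 1
    printFun(test - 1)    # recursive call with a smaller input
    print(test, end=" ")  # printed on the way back up: 1 2 3

printFun(3)  # prints: 3 2 1 1 2 3
```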

In the output, the values from 3 down to 1 are printed first, and then 1 up to 3. The state of the memory stack is shown in the diagram below. For a basic understanding, please read the following articles on recursion. The recursion tree for input 5 gives a clear picture of how a big problem is broken into smaller ones, and the time complexity of the program can be estimated from the number of function calls in that tree.

In computer science, recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem.

Recursion solves such recursive problems by using functions that call themselves from within their own code.

## Analysis of Recursive Algorithms

The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions. Most computer programming languages support recursion by allowing a function to call itself from within its own code.

Some functional programming languages define no explicit looping constructs at all and rely solely on recursion. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for. Repeatedly calling a function from within itself may cause the call stack to grow to a size equal to the sum of the input sizes of all the calls involved. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for large problems, it is important to use optimization techniques such as tail-call optimization.

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization. A recursive function definition has one or more base cases, meaning inputs for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning inputs for which the program recurs (calls itself).

For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n · (n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case". The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, each recursive call must simplify the input problem in such a way that the base case is eventually reached.
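The two equations translate directly into code; a minimal sketch in Python (the function name is our own):

```python
def factorial(n):
    """Recursive factorial: 0! = 1 (base case), n! = n * (n-1)! (recursive case)."""
    if n == 0:                       # base case: terminates the recursion
        return 1
    return n * factorial(n - 1)      # recursive case: simpler input n-1

print(factorial(5))  # prints: 120
```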

Functions that are not intended to terminate under normal circumstances (for example, some system and server processes) are an exception to this. Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

Such an example is more naturally treated by corecursion. Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is a technique for representing data whose exact size is unknown to the programmer: the programmer can specify this data with a self-referential definition.

There are two types of self-referential definitions: inductive and coinductive definitions. An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (the source gives the definition in Haskell syntax): a list of strings is either empty, or a structure that contains a string and a list of strings.
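A rough Python rendering of that inductive definition, sketched with a recursive dataclass (the names Cons and length are our own, not from the source):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cons:
    """A non-empty list: one string plus the rest of the list (None = empty)."""
    head: str
    tail: Optional["Cons"]  # self-reference: the tail is itself a list of strings

# Build the list ["a", "b", "c"] from the empty list outward.
lst = Cons("a", Cons("b", Cons("c", None)))

def length(lst: Optional[Cons]) -> int:
    """The inductive definition permits only finite lists,
    so structural recursion over it terminates."""
    return 0 if lst is None else 1 + length(lst.tail)

print(length(lst))  # prints: 3
```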

The self-reference in the definition permits the construction of lists of any finite number of strings. Another example of an inductive definition is the natural numbers (or positive integers): a natural number is either zero or the successor of a natural number. Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:
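A grammar of that shape, sketched in BNF (the nonterminal names are our own choice):

```
<expr> ::= <number>
         | <expr> * <expr>
         | <expr> + <expr>
```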

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might say: a stream of strings is an object s such that head(s) is a string and tail(s) is a stream of strings. This is very similar to the inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure (namely, via the accessor functions head and tail) and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.
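One way to realize such a coinductive definition in Python is with a lazily evaluated stream; this is a sketch under our own naming, not part of the source:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stream:
    """An infinite stream, defined by its observations: head and tail."""
    head: str
    _tail: Callable[[], "Stream"]  # delayed, so the stream can be infinite

    def tail(self) -> "Stream":
        return self._tail()

def repeat(s: str) -> Stream:
    """The infinite stream s, s, s, ... defined by self-reference."""
    return Stream(s, lambda: repeat(s))

xs = repeat("a")
print(xs.head, xs.tail().head, xs.tail().tail().head)  # prints: a a a
```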

Corecursion is related to coinduction, and can be used to compute particular instances of possibly infinite objects.


As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown.

So consider the recursive definition n! = n · (n − 1)! with 0! = 1.

Algorithm F(n). What is the basic operation? Solve by the method of backward substitutions: write the recurrence relation for the number of basic operations, and don't forget the initial conditions (IC). For the Tower of Hanoi, the problem size is n, the number of discs. Where did the exponential term come from? Because two recursive calls are made. Suppose three recursive calls were made; what would the order of growth be? Lesson learned: be careful with recursive algorithms, because they can grow exponentially.
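The backward substitution the notes call for, applied to the Tower of Hanoi move-count recurrence M(n), makes the exponential term explicit:

```latex
\begin{aligned}
M(n) &= 2M(n-1) + 1, \qquad M(1) = 1 \\
     &= 2\bigl(2M(n-2) + 1\bigr) + 1 = 2^2 M(n-2) + 2 + 1 \\
     &= 2^i M(n-i) + 2^{i-1} + \dots + 2 + 1 = 2^i M(n-i) + 2^i - 1 \\
     &= 2^{n-1} M(1) + 2^{n-1} - 1 = 2^n - 1 .
\end{aligned}
```

With three recursive calls per level the recurrence becomes M(n) = 3M(n−1) + 1, and the same substitution gives an order of growth of 3^n.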

Be especially careful if the problem size is measured by the level of the recursion tree while the operation count is the total number of nodes. Algorithm BinRec(n). The division and floor function in the argument of the recursive call make the analysis difficult; the smoothness rule (see Appendix B) says that is OK.
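For BinRec, where A(n) counts the additions, restricting to powers of two n = 2^k (which the smoothness rule justifies) lets backward substitution go through cleanly:

```latex
\begin{aligned}
A(n) &= A\!\left(\lfloor n/2 \rfloor\right) + 1, \qquad A(1) = 0 \\
A(2^k) &= A(2^{k-1}) + 1 = A(2^{k-2}) + 2 = \dots = A(1) + k = k \\
A(n) &= \log_2 n = \Theta(\log n).
\end{aligned}
```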

The Fibonacci numbers are a classic example for more elaborate recurrence relations. Why second-order linear? Because each term depends linearly on the two previous terms.


The problem size is n, the sequence number of the Fibonacci number. In general, the solution to the inhomogeneous problem is equal to the sum of the solution to the homogeneous problem plus a particular solution to the inhomogeneous part.

The undetermined coefficients of the solution to the homogeneous problem are used to satisfy the IC. A(n) is the solution to the complete inhomogeneous problem.

B(n) is the solution to the homogeneous problem, and I(n) a particular solution to the inhomogeneous part alone. We guess at I(n) and then determine the new IC for the homogeneous problem for B(n). The result is the same as the relation for F(n), with different IC. We do not really need the exact solution; we can conclude the order of growth. There is also the Master Theorem, which gives the asymptotic limit for many common recurrences.
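For the Fibonacci relation itself, the homogeneous second-order linear recurrence is solved through its characteristic equation (phi is the golden ratio):

```latex
\begin{aligned}
F(n) &= F(n-1) + F(n-2), \qquad F(0)=0,\; F(1)=1 \\
r^2 &= r + 1 \;\Longrightarrow\; r = \frac{1 \pm \sqrt{5}}{2} \\
F(n) &= \frac{1}{\sqrt{5}}\left(\phi^{\,n} - \hat\phi^{\,n}\right),
\qquad \phi = \frac{1+\sqrt{5}}{2},\quad \hat\phi = \frac{1-\sqrt{5}}{2},
\end{aligned}
```

so F(n) grows as phi^n, and the naive recursive algorithm makes an exponential number of calls.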

Algorithm Fib(n). Analysis of recursive algorithms: what is a recursive algorithm? Example: factorial, n!. The plan of analysis is typical:

1. Specify the problem size.
2. Identify the basic operation.
3. Consider worst, best, and average cases.
4. Set up the recurrence relation; don't forget the initial conditions (IC).
5. Solve the recurrence, at least for the order of growth.

For the Tower of Hanoi: 1. The problem size is n, the number of discs.

2. The basic operation is moving a disc from one rod to another. 3. There is no worst or best case. 4. The recurrence is M(n) = 2M(n−1) + 1 with M(1) = 1, giving M(n) = 2^n − 1. Can we do it better?

Some computer programming languages allow a module or function to call itself. This technique is known as recursion. Without a base case to stop it, a recursive function can run forever, like an infinite loop.

Many programming languages implement recursion by means of stacks. Generally, whenever a function (the caller) calls another function (the callee), or itself as callee, the caller transfers execution control to the callee. This transfer process may also involve some data being passed from the caller to the callee. This implies that the caller function has to suspend its execution temporarily and resume later, when execution control returns from the callee function. Here, the caller function needs to resume exactly from the point of execution where it put itself on hold.

It also needs the exact same data values it was working on. For this purpose, an activation record (or stack frame) is created for the caller function. This activation record keeps the information about local variables, formal parameters, the return address, and all information passed to the caller function.

One may ask why use recursion when the same task can be done with iteration. The first reason is that recursion often makes a program more readable; and with modern compiler support such as tail-call optimization, its overhead can be kept small.

For iteration, we count the number of iterations to estimate the time complexity. Likewise, for recursion, assuming the work per call is constant, we try to figure out the number of times a recursive call is made. Space complexity is counted as the amount of extra space required for a module to execute.

For iteration, the compiler hardly requires any extra space; it keeps updating the values of the variables used in the iterations. But in the case of recursion, the system needs to store an activation record each time a recursive call is made. Hence, the space complexity of a recursive function may be higher than that of an iterative one.

Recursion means "defining a problem in terms of itself". This can be a very powerful tool in writing algorithms.

Recursion comes directly from Mathematics, where there are many examples of expressions written in terms of themselves.


Recursion is the process of defining a problem, or the solution to a problem, in terms of a simpler version of itself. Here the solution to finding your way home takes three steps. First, we don't go home if we are already home. Second, we do a very simple action that makes our situation simpler to solve.

Finally, we redo the entire algorithm. The above example is called tail recursion: the very last statement is the call to the recursive algorithm. Tail recursion can be translated directly into loops. Another example of recursion would be finding the maximum value in a list of numbers: the maximum value in a list is either the first number or the biggest of the remaining numbers.

Here is how we would write the pseudocode of the algorithm. The "work toward base case" is where we make the problem simpler, e.g. by shrinking the list. The recursive call is where we use the same algorithm to solve a simpler version of the problem.
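A sketch of that algorithm in Python (the name find_max is ours): the slice numbers[1:] is the "work toward the base case", and a single-element list is the base case:

```python
def find_max(numbers):
    """Maximum of a non-empty list: either the first number,
    or the biggest of the remaining numbers."""
    if len(numbers) == 1:                 # base case: one element is its own maximum
        return numbers[0]
    rest_max = find_max(numbers[1:])      # recursive call on a smaller problem
    return numbers[0] if numbers[0] > rest_max else rest_max

print(find_max([3, 41, 12, 9, 74, 15]))  # prints: 74
```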


The base case is the solution to the "simplest" possible problem. For example, the base case in the problem "find the largest number in a list" would be a list containing only one number. Adding three numbers is equivalent to adding the first two numbers, and then adding that sum to the third number.

Note that in Matlab a function can be called without all of its arguments. The nargin function tells the computer how many values were actually specified.
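The source's example is Matlab, which has no direct Python equivalent for nargin, but default arguments give the same effect; a sketch with names of our own choosing:

```python
def add_numbers(a, b=None, c=None):
    """Add up to three numbers recursively.
    With three arguments we reduce to the two-argument case,
    which is the base case (mirroring the Matlab nargin trick)."""
    if c is not None:                 # three arguments: work toward the base case
        return add_numbers(a + b, c)
    if b is not None:                 # two arguments: the base case
        return a + b
    return a                          # one argument: nothing to add

print(add_numbers(1, 2, 3))  # prints: 6
```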

### Data structure - What is a recursive algorithm?

This reduces the number of parameters (nargin) sent in to the function from 3 to 2, and 2 is the base case! In a recursive algorithm, the computer "remembers" every previous state of the problem. This information is "held" by the computer on the "activation stack" (i.e., the call stack). Consider a rectangular grid of rooms, where each room may or may not have doors on the North, South, East, and West sides. How do you find your way out of a maze? Here is one possible "algorithm" for finding the answer.

## Recursion (computer science)

The "trick" here is of course, how do we know if the door leads to a room that leads to the exit? The answer is we don't but we can let the computer figure it out for us. What is the recursive part about the above algorithm?

Its the "door leads out of the maze". How do we know if a door leads out of the maze? We know because inside the next room going through the doorwe ask the same question, how do we get out of the maze? What happens is the computer "remembers" all the "what ifs".Recursion adjective: recursive occurs when a thing is defined in terms of itself or of its type.

Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no infinite loop or infinite chain of references can occur.

In mathematics and computer science, a class of objects or methods exhibits recursive behavior when it can be defined by two properties: a simple base case (or cases), and a recursive step that reduces all other cases toward the base case. For example, the following is a recursive definition of a person's ancestor. One's ancestor is either one's parent (base case), or one's parent's ancestor (recursive step). The Fibonacci sequence is another classic example of recursion: Fib(0) = 0 and Fib(1) = 1 are the base cases, and Fib(n) = Fib(n − 1) + Fib(n − 2) for all integers n > 1. Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: "Zero is a natural number, and each natural number has a successor, which is also a natural number."

Other recursively defined mathematical objects include factorials, functions (e.g., recurrence relations), sets (e.g., the Cantor ternary set), and fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor below. Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself.

A procedure that goes through recursion is said to be 'recursive'. To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps. Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. When a procedure is defined as such, this immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete.

But even if it is properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old, partially executed invocation of the procedure; this requires some administration as to how far various simultaneous instances of the procedures have progressed. For this reason, recursive definitions are very rare in everyday situations.

Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.

This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs within the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence.

This is really just a special case of the mathematical definition of recursion. It provides a way of understanding the creativity of language, the unbounded number of grammatical sentences, because it immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that... There are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another.

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word and, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others.

It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. To provide a single denotation for it that is suitably flexible, and is typically defined so that it can take any of these different types of meanings as arguments, one defines it for a simple case in which it combines sentences, and then defines the other cases recursively in terms of the simple one.

A recursive grammar is a formal grammar that contains recursive production rules. Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress.

It is not unusual for such books to include a joke entry in their glossary along the lines of "Recursion, see Recursion." A variation is found in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself ("recursion 86, ...").

It did not appear in the first edition of The C Programming Language. The joke is part of functional programming folklore and was already widespread in the functional programming community before the publication of the aforementioned books. Another joke is that "to understand recursion, you must understand recursion."

Recursion is a powerful problem-solving tool. In this lesson we consider a few well-known recursive algorithms.

We present them first, since it is easy to understand why they are recursive. Recursive definitions are in fact mathematical definitions that can be translated directly into code, and they also let us prove correctness. Let us start with the binary search algorithm.

Looking up a word in the telephone directory? Here is the perfect algorithm. Find the middle of the directory. If the word you are searching for is alphabetically higher than the middle word, then focus only on the right half of the book; otherwise focus on the left half.

Who knows, you may be lucky and find the word right in the middle. The key here is that every search step reduces the search area by half. So if we are dealing with a data set of size 2^20 (that is, about a million records), the word can be found, or we can determine that the word is not in the directory, in no more than 20 comparisons. The only assumption we are making here is that the directory is a sorted list. Here is an algorithm for searching a dictionary or directory. Factorial is an important mathematical function; its recursive definition, 0! = 1 and n! = n · (n − 1)! for n > 0, can be translated directly into code. Note that we have a base case as well as a recursive case.
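A recursive binary search in Python, following the halving idea above (the function name is our own); it assumes the list is sorted:

```python
def binary_search(words, target, lo=0, hi=None):
    """Return the index of target in the sorted list words, or -1 if absent.
    Each call halves the search area, so at most about log2(n) comparisons."""
    if hi is None:
        hi = len(words) - 1
    if lo > hi:                       # base case: empty search area
        return -1
    mid = (lo + hi) // 2
    if words[mid] == target:          # lucky: found it right in the middle
        return mid
    if target > words[mid]:           # alphabetically higher: right half
        return binary_search(words, target, mid + 1, hi)
    return binary_search(words, target, lo, mid - 1)  # otherwise: left half

words = ["ant", "bee", "cat", "dog", "elk", "fox"]
print(binary_search(words, "dog"))  # prints: 3
```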

Call graph for fib(6): the same subproblems appear over and over. The recursive Fibonacci algorithm does have a terminal case to end the recursion, and it does express the solution to the larger problem using solutions to one or more smaller subproblems.

One of the important assumptions we made about the binary search here is that the list is sorted.