In the realm of computer science and algorithm design, dynamic programming stands as an influential problem-solving technique, used for everything from optimizing resource allocation to finding the shortest path in a graph. It empowers us to solve intricate challenges by breaking them down into smaller problems.
This article aims to reveal the essence of dynamic programming by shedding light on its defining characteristics, fundamentals, and real-life examples. Let's start our journey with the question of what dynamic programming actually is.
What is Dynamic Programming in General?
Dynamic programming is a special approach to problem-solving. Unfortunately, there is no single definition of dynamic programming, so let's try to form one ourselves. The idea is that the optimal solution can often be found by considering all possible ways of solving a problem and choosing the best one among them.
The operation of dynamic programming is very similar to recursion with caching of intermediate solutions, a technique called memoization. Recursive algorithms tend to divide a large problem into smaller subtasks and solve them. Dynamic algorithms divide the problem into chunks and compute them one at a time, building up solutions step by step. Therefore, dynamic algorithms can be thought of as recursion working from the bottom up.
It was Richard Bellman who invented and established the concept of "dynamic programming" in the scientific community. In the 1940s, he used the term for problems where the solution to one part depended on the solution to another. Then, in 1953, Bellman refined and expanded the definition of dynamic programming to its current form.
He describes how the concept got its name in an amusing way. Bellman had to come up with a name, but the official to whom he reported at the time strongly hated (the mathematician emphasizes that he "hated") mathematical terms and did not want to deal with them. Employees would literally tremble with fear whenever their boss couldn't understand something and dragged it through the mud with his unfunny puns. Math terms were a particular target.
According to the book "Dynamic Programming", Richard Bellman spent a lot of time and effort coming up with a name. The word "programming" was chosen as an analog of the word "planning", which did not fit for a number of reasons (the Soviets were planning something all the time).
The word "dynamic" was chosen because, in addition to conveying the essence of the approach, it was hard to use it in a derogatory or pejorative sense. Bellman didn't want an executive to be able to disparage his term, and not even an official could have done so easily (readers, of course, will find a couple of variations). This is how the term "dynamic programming" was formed.
Summing up, dynamic programming is an approach to solving problems where the original problem is broken down into smaller subtasks that are easier to solve, and the solutions to these subtasks are then combined to solve the original problem. The approach has some essential features:
- It makes sense only if the solutions of subtasks can be reused effectively (memoized, for example);
- the subtasks share a common structure, so a single uniform method can be applied to all of them, instead of solving each one separately with a different algorithm.
Advantages and Disadvantages of DP for Business
The main idea of dynamic programming algorithms can be traced in the Levenshtein Distance example presented below: we sacrifice a solid amount of memory (O(nm) instead of O(1)) but get a huge time gain (O(nm) vs O(3^n)). Let us list the key features of dynamic programming.
- Speed. Unsolvable problems become solvable, in most cases in quadratic time: one operation to fill each table cell, and the problem is solved.
- Universality. One Ring to rule them all: a compact system of rules for filling the table guarantees a solution on any input. No exceptions, no boundary cases, a few lines, and the complicated problem is solved.
- Accuracy. Since the dynamic programming algorithm considers absolutely all possible options and scenarios, it is guaranteed to find the best solution. No loss of precision, no approximate answers. If there is a solution—it will be found.
- Memory. Most dynamic programming algorithms take time to build and fill tables that consume memory resources. This can be an issue if the tables themselves are massive, or if you need to build a lot of such tables and keep them all in memory to solve a problem.
- Cognitive load. Solving a tangled issue with a compact system of rules is a very tempting idea. But there is a nuance here: one must learn to “think in dynamic programming” to compose or at least understand these systems of rules. This is the reason for the rather controversial reputation of dynamic programming.
How to Apply Dynamic Programming Algorithms to Solve Problems
The demonstration subject will be a classic dynamic programming problem: Levenshtein Distance. Despite the seemingly complex name, the task itself is simple: transform one word into another by adding, deleting, and replacing letters using a minimum number of operations.
The problem can be formulated as follows: find the minimum "distance" between two words. The distance here is the minimum number of operations that must be applied to the first word to get the second (or vice versa).
We have three available operations:
- insert – add one letter anywhere in the word, including at the beginning or the end
- delete – remove one letter from any place in the word
- replace – replace one letter in a certain place with another
All these operations have an equal cost: +1 to the distance between words.
Take, for example, two simple words, MONEY and MONKEY. What is the minimum number of operations needed to turn MONEY into MONKEY? The shrewd human eye quickly guesses one: add a K between the third and the fourth letter.
Let's take a more complicated case: turning SUNDAY into SATURDAY. Here the number of combinations to search is potentially gigantic. If we solve the problem by brute force, we consider all possible combinations, much like in password cracking. Instead of a possible 94 candidate characters, we have three operations – insert, delete, and replace. Three combinations for the first letter, 3×3 for two letters, 3×3×3 for three letters, and 3^N for N letters.
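The dynamic programming alternative fills a table where each cell holds the distance between a prefix of one word and a prefix of the other. A minimal sketch in Python (the function name `levenshtein` is ours, not from the original article):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insert/delete/replace operations turning a into b."""
    n, m = len(a), len(b)
    # dist[i][j] = distance between the first i letters of a and first j of b
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i                      # delete all i letters
    for j in range(m + 1):
        dist[0][j] = j                      # insert all j letters
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,         # delete a letter from a
                dist[i][j - 1] + 1,         # insert a letter into a
                dist[i - 1][j - 1] + cost,  # replace (or keep, if equal)
            )
    return dist[n][m]
```

For the examples above, `levenshtein("MONEY", "MONKEY")` gives 1 and `levenshtein("SUNDAY", "SATURDAY")` gives 3: an O(nm) computation instead of an exponential search.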
Memoization and tabulation
Memoization is an optimization technique that memorizes the results of calculations and reuses them whenever the same calculation is needed again. A favorite example: when calculating the seventh Fibonacci number, you can memorize the fourth Fibonacci number and compute it only once instead of recomputing it every time. This technique is also described as top-down.
Tabulation is an optimization technique that starts by solving the simplest subtasks and then, as you progress, deals with more and more complex subtasks until the main problem is solved. In doing so, solutions to simpler subtasks are used to solve more complex ones. In contrast to memoization, this approach is called "bottom-up" because you take on the simplest tasks first.
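Both techniques can be illustrated with the Fibonacci example mentioned above; here is a small sketch in Python (the function names are ours):

```python
from functools import lru_cache

# Top-down (memoization): plain recursion plus a cache of computed results,
# so each Fibonacci number is calculated only once.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the simplest subtasks upward.
def fib_tab(n: int) -> int:
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both return the same answers; the difference is only in the direction of the computation: recursion with a cache versus an explicit table filled in order.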
Do Dynamic Programming Algorithms Solve Real-World Problems?
Yes, of course! Dynamic programming algorithms solve real-world problems perfectly well. It's just not always easy to see.
One of the most obvious tasks is building a route through multiple points. Online map apps and cab services often run into this kind of problem (though it's likely that some services use third-party APIs). For example, you want to send a group of friends home by cab after a fun party. How do you decide whom to drop off first and which streets to take? This is where dynamic programming meets graph theory. Among classical DP problems, this intersects with the traveling salesman problem.
Accordingly, a similar example in computer networks is when a packet of data needs to be delivered to several recipients at once. The algorithm needs to figure out which route and in what order to deliver the data to all destinations.
The diff utility is also a prime example of the use of dynamic programming. Since the goal is to find similar substrings in two strings, we can clearly see one of the classic dynamic programming problems: finding the longest common subsequence.
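The longest common subsequence is computed with the same table-filling pattern as the Levenshtein distance; a minimal sketch in Python (the function name `lcs_length` is ours):

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of strings a and b."""
    # table[i][j] = LCS length of the first i letters of a and first j of b
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # matching letters extend the common subsequence
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                # otherwise skip a letter from one of the strings
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[len(a)][len(b)]
```

A diff tool works with lines instead of letters and also reconstructs the subsequence itself by walking the table backwards, but the core recurrence is the same.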
Another interesting algorithm example: here's a story about how a problem related to diff has to be solved in a real application, right on the frontend! These folks compress images using dynamic programming. As you know, image compression uses approaches where similar areas/pixels are encoded with less information. But the example above describes an approach that compresses an image by modifying the image content itself while still retaining a high level of information content (this resembles the well-known seam-carving technique). That is, if the meaning of the photo was "a dog with a cat on the lawn", it will stay the same after cutting. And that is not an easy task!
When using this approach, the programmers involved should first separate the business logic from the algorithmic part, so that they don't have to explain to people why they missed a bug where arr[100] was read from an array of size 100. But this, of course, applies to more than just dynamic programming algorithms.
Yes, in the world of real outsourcing/outstaffing tasks, you don’t often have to solve problems using dynamic programming. And you don’t have to try to figure out where to apply it. It’s just enough that you already know about this approach, understand how it works, and know where it can be used in real life.
Applications of Dynamic Programming to Algorithmic Problems
To get a better understanding of dynamic programming, we will give examples of solving three different problems, in order of increasing complexity (the first is the easiest, the last the most difficult).
Task 1: Climbing the ladder
You are climbing a ladder. To get to the top, you have to climb N steps. In one move, you can climb either one or two steps. How many distinct ways are there to reach the top?
Let us apply our approach. Define a function DP_steps(N) that returns the number of different ways to climb a ladder of N steps.
Next, let's see how the solution behaves for different numbers of steps.
- DP_steps(1) = 1. There is only one way to go up one step: take it in a single move.
- DP_steps(2) = 2. There are two ways to go up two steps: either take both steps in one move, or make two moves of one step each.
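For larger N, every route ends with a final move of either one or two steps, which gives the recurrence DP_steps(N) = DP_steps(N-1) + DP_steps(N-2). A bottom-up sketch in Python (the name `dp_steps` is ours):

```python
def dp_steps(n: int) -> int:
    """Number of ways to climb n steps taking 1 or 2 steps per move."""
    if n <= 2:
        return n                    # base cases: DP_steps(1)=1, DP_steps(2)=2
    ways = [0] * (n + 1)
    ways[1], ways[2] = 1, 2
    for i in range(3, n + 1):
        # the last move came from step i-1 or step i-2
        ways[i] = ways[i - 1] + ways[i - 2]
    return ways[n]
```

The answers follow the Fibonacci pattern: dp_steps(3) = 3, dp_steps(4) = 5, and so on, computed in linear time.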
Task 2. Find the length of the longest strictly increasing subsequence
A subsequence is another sequence that can be obtained from the original one by removing some (or no!) elements.
For example, [2, 4, 1] is a subsequence of [1, 2, 5, 7, 4, 2, 1]. For a sequence to be strictly increasing, each of its elements must be strictly greater than the previous one.
The first thing that comes to mind is to go through all possible sequences: iterate over the numbers, and every time we hit a number that breaks the ascending run, recursively call the search for the longest subsequence starting from that number.
For example, suppose we have the array [3, 4, 1, 7], and we iterate over it:
- The first element is 3, all good.
- The second element is 4, the sequence is increasing, and everything is good. The length is 2.
- The third element is 1. Yep, the increasing subsequence is broken, we can’t add 1 to it.
Then we start a new search for a strictly increasing sequence, but starting with 1. That is, we call our method for the array [1, 7]. And so on.
In the end, we simply return the size of the largest strictly increasing subsequence. Clearly, if the array is strictly decreasing (for example, [7, 6, 5, 4, 3, 2, 1]), our function will be called recursively many times.
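The brute-force recursion above blows up on such inputs. The standard dynamic programming alternative, an O(n²) sketch in Python (the function name is ours), stores for each position the length of the best increasing subsequence ending there:

```python
def longest_increasing_subsequence(nums: list[int]) -> int:
    """Length of the longest strictly increasing subsequence of nums."""
    if not nums:
        return 0
    # best[i] = length of the longest strictly increasing
    # subsequence that ends exactly at index i
    best = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:
                # nums[i] can extend any increasing subsequence ending at j
                best[i] = max(best[i], best[j] + 1)
    return max(best)
```

For [1, 2, 5, 7, 4, 2, 1] this returns 4 (the subsequence 1, 2, 5, 7), and for the strictly decreasing [7, 6, 5, 4, 3, 2, 1] it returns 1 without any exponential recursion.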
Task 3. Finding the Nth ugly number
An ugly number is a positive number whose only prime divisors are 2, 3, and 5. A straightforward solution: go through all numbers in a row, starting with one, counting the ugly numbers we encounter. When we reach the Nth one, we return it.
The above algorithm relies on a function is_number_ugly. This is a simple function that checks the prime divisors of a number: if the only prime divisors are 2, 3, and 5, it returns true; otherwise, it returns false.
The main drawback of this algorithm is that for every number we need to check whether it is ugly or not. And this task is not done in a constant amount of time, so we want to somehow avoid such calculations.
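One common way to avoid the per-number check (a sketch under our own naming, not necessarily the exact approach the original article had in mind) is to build the sequence of ugly numbers directly: every ugly number after 1 is a previous ugly number multiplied by 2, 3, or 5, so three pointers into the list suffice.

```python
def nth_ugly(n: int) -> int:
    """Return the nth number whose only prime divisors are 2, 3 and 5."""
    ugly = [1]
    i2 = i3 = i5 = 0    # indices of the next ugly number to multiply by 2/3/5
    while len(ugly) < n:
        candidate = min(ugly[i2] * 2, ugly[i3] * 3, ugly[i5] * 5)
        ugly.append(candidate)
        # advance every pointer that produced this candidate,
        # which also skips duplicates like 6 = 2*3 = 3*2
        if candidate == ugly[i2] * 2:
            i2 += 1
        if candidate == ugly[i3] * 3:
            i3 += 1
        if candidate == ugly[i5] * 5:
            i5 += 1
    return ugly[n - 1]
```

Each ugly number is produced in constant time from earlier results, so the whole computation is O(n) instead of scanning and factoring every integer along the way.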
Let's sum up. Dynamic programming is an approach to solving algorithmic problems that can greatly reduce the running time of programs. It potentially uses a non-constant amount of memory (the larger the problem, the more memory is needed to solve it), but the memory cost is often negligible compared to the acceleration we get.
Dynamic programming algorithms are rarely needed in the day-to-day tasks of software development engineers, and the approach is not trivial or intuitive. But knowing that such a method exists, understanding how it works, and recognizing where it can significantly improve program performance is valuable both for solving specific problems and for a specialist's worldview.