Techletter #56 | January 21, 2024
Strategies for problem-solving in Computer Science
Problem-solving strategies refer to systematic approaches or techniques used to tackle and resolve computational problems.
Some of the strategies are:
Iteration
The iterative strategy consists of using loops (e.g. for, while) to repeat a process until a condition is met. Each pass through the loop is called an iteration.
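A minimal sketch of the iterative strategy in TypeScript, summing an array with a for loop (the function name and sample input are just illustrative):

```ts
// Iteratively sum an array: each pass through the loop body is one iteration.
function sum(numbers: number[]): number {
  let total = 0;
  for (const n of numbers) {
    total += n; // repeat this step until every element has been visited
  }
  return total;
}

console.log(sum([1, 2, 3, 4])); // 10
```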
Recursion
A recursive function calls itself until it reaches a base case, then returns a value. In other words, it solves a problem by solving smaller instances of the same problem. Recursion is based on the principle of divide and conquer, where a complex problem is broken down into simpler subproblems.
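A minimal sketch of recursion, computing a factorial:

```ts
// Recursive factorial: the base case stops the chain of self-calls.
function factorial(n: number): number {
  if (n <= 1) return 1;          // base case
  return n * factorial(n - 1);   // smaller instance of the same problem
}

console.log(factorial(5)); // 120
```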
Brute force
Consider all possible solutions and select the one that satisfies the problem constraints. While not always the most efficient approach, brute force can be useful for small input sizes or as a baseline for comparison with more sophisticated algorithms.
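A minimal sketch of brute force on a toy "find two numbers that add up to a target" problem (the function name and inputs are illustrative):

```ts
// Brute-force two-sum: try every pair until one satisfies the constraint.
function twoSum(numbers: number[], target: number): [number, number] | null {
  for (let i = 0; i < numbers.length; i++) {
    for (let j = i + 1; j < numbers.length; j++) {
      if (numbers[i] + numbers[j] === target) return [i, j];
    }
  }
  return null; // no pair satisfies the constraint
}

console.log(twoSum([3, 8, 11, 7], 15)); // [1, 3]
```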
Backtracking
Backtracking is a problem-solving strategy where you explore different possibilities and, if you hit a roadblock, you go back to the previous decision point and try a different option. It’s often used when you’re trying to find a solution to a problem with a lot of choices, and you want to efficiently explore those choices without wasting time on paths that lead to dead ends. It’s like navigating through options until you find the right one.
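A minimal sketch of backtracking, generating all permutations of a list by making a choice, exploring it, and then undoing it (the helper names are illustrative):

```ts
// Backtracking: build permutations one choice at a time,
// undoing ("backtracking") each choice after exploring it.
function permutations<T>(items: T[]): T[][] {
  const result: T[][] = [];
  const current: T[] = [];
  const used = new Array(items.length).fill(false);

  function explore(): void {
    if (current.length === items.length) {
      result.push([...current]);       // a complete solution
      return;
    }
    for (let i = 0; i < items.length; i++) {
      if (used[i]) continue;           // skip choices already taken on this path
      used[i] = true;
      current.push(items[i]);
      explore();                       // go deeper with this choice
      current.pop();                   // backtrack: undo the choice
      used[i] = false;
    }
  }

  explore();
  return result;
}

console.log(permutations([1, 2, 3]).length); // 6
```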
Heuristics
A heuristic method, or simply a heuristic, is a method that leads to a solution without guaranteeing it is the best or optimal one. A very common heuristic is the greedy approach: make what looks like the best choice at each step, and never come back to question it later. In that sense it's the opposite of backtracking.
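A minimal sketch of a greedy heuristic, making change with the largest coins first; with the made-up coin system below it is fast but not optimal:

```ts
// Greedy coin change: always take the largest coin that still fits.
// Quick, but not guaranteed optimal for every coin system (a heuristic).
function greedyChange(amount: number, coins: number[]): number[] {
  const sorted = [...coins].sort((a, b) => b - a);
  const result: number[] = [];
  for (const coin of sorted) {
    while (amount >= coin) {
      result.push(coin);
      amount -= coin;
    }
  }
  return result;
}

console.log(greedyChange(6, [1, 3, 4])); // [4, 1, 1] — the optimal answer is [3, 3]
```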
Divide & Conquer
Break a bigger problem down into several smaller subproblems, solve (conquer) each one, and combine their results into the overall solution.
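A minimal sketch of divide and conquer using merge sort: split the array in half, sort each half recursively, then merge the two sorted halves:

```ts
// Merge sort: divide the array, sort each half, then combine (merge) the results.
function mergeSort(numbers: number[]): number[] {
  if (numbers.length <= 1) return numbers;            // trivially sorted
  const mid = Math.floor(numbers.length / 2);
  const left = mergeSort(numbers.slice(0, mid));      // divide
  const right = mergeSort(numbers.slice(mid));
  const merged: number[] = [];                        // combine two sorted halves
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 2, 9, 1])); // [1, 2, 5, 9]
```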
Dynamic Programming
Dynamic programming consists of identifying repeated subproblems so that each one is computed only once, typically by storing its result the first time it is solved.
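A minimal sketch of dynamic programming, computing Fibonacci numbers with memoization so each subproblem is solved only once:

```ts
// Memoized Fibonacci: each subproblem fib(k) is computed once and then reused.
function fib(n: number, memo: Map<number, number> = new Map()): number {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n)!;   // repeated subproblem: reuse the stored result
  const value = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, value);
  return value;
}

console.log(fib(50)); // 12586269025 — instant, versus a huge wait without memoization
```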
Branch & Bound
Many problems involve minimizing or maximizing a target value: find the shortest path, get the maximum profit, etc. They're called optimization problems. When the solution is a sequence of choices, we often use a strategy called branch and bound. Its aim is to save time by quickly detecting and discarding bad choices.
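A minimal sketch of branch and bound on a toy 0/1 knapsack problem (the items, capacity, and the simple optimistic bound below are illustrative assumptions): each choice branches into "take the item" or "skip it", and a branch is discarded as soon as the bound shows it cannot beat the best value found so far.

```ts
// Branch & bound for a toy 0/1 knapsack: explore include/exclude choices,
// pruning branches whose optimistic bound can't beat the current best.
interface Item { weight: number; value: number; }

function knapsack(items: Item[], capacity: number): number {
  let best = 0;
  // Optimistic bound: current value plus the value of every remaining item.
  const remainingValue = (index: number) =>
    items.slice(index).reduce((total, it) => total + it.value, 0);

  function explore(index: number, weight: number, value: number): void {
    best = Math.max(best, value);
    if (index === items.length) return;
    if (value + remainingValue(index) <= best) return; // bound: this branch can't win
    const item = items[index];
    if (weight + item.weight <= capacity) {
      explore(index + 1, weight + item.weight, value + item.value); // branch: take it
    }
    explore(index + 1, weight, value);                              // branch: skip it
  }

  explore(0, 0, 0);
  return best;
}

const items = [{ weight: 2, value: 3 }, { weight: 3, value: 4 }, { weight: 4, value: 5 }];
console.log(knapsack(items, 5)); // 7
```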
How does the event loop work in Node.js? A simplified guide
One of the reasons for Node.js's high performance is the event loop, which manages asynchronous operations.
- When Node.js starts, it initializes the event loop to watch for I/O operations and other asynchronous tasks.
- Any task or I/O operation is added to a queue, which can be either the microtask queue or the macrotask/callback queue (see the ordering sketch after this list).
- The event loop iteratively checks for tasks in the queues while also waiting for I/O and timers.
- When the event loop detects tasks in a queue, it executes them in specific phases, ensuring order and efficiency.
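A minimal sketch of how the two kinds of queues are drained under Node.js: microtasks (process.nextTick and Promise callbacks) run before the next macrotask (here a setTimeout):

```ts
// Microtasks drain before the next macrotask is taken from its queue.
setTimeout(() => console.log("macrotask: setTimeout"), 0);
Promise.resolve().then(() => console.log("microtask: promise"));
process.nextTick(() => console.log("microtask: nextTick"));
console.log("synchronous");

// Output:
// synchronous
// microtask: nextTick
// microtask: promise
// macrotask: setTimeout
```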
Event Loop Phases
- Timers: Manages timer events for scheduled tasks.
- Pending callbacks: Handles system events such as I/O, which are typically queued by the kernel.
- Idle / prepare: Ensures internal actions are managed before I/O events handling.
- Poll: Retrieves new I/O events and executes their callbacks.
- Check: Runs setImmediate() callbacks (see the sketch after this list).
- Close: Handles close events, such as a socket's 'close' event.
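A minimal sketch of the phases at work: inside an I/O callback (poll phase), the check phase runs before the next timers phase, so setImmediate fires before a zero-delay setTimeout. The file read below is just an arbitrary way to get into an I/O callback:

```ts
import { readFile } from "node:fs";

// Inside an I/O callback, check runs before the next timers phase,
// so setImmediate always fires before setTimeout(..., 0) here.
readFile("package.json", () => {
  setTimeout(() => console.log("timers phase: setTimeout"), 0);
  setImmediate(() => console.log("check phase: setImmediate"));
});

// Output:
// check phase: setImmediate
// timers phase: setTimeout
```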
Yarn vs npm
npm uses npm i (short for npm install) to install packages; packages are installed sequentially, and it generates a package-lock.json lockfile.
yarn uses the yarn command to install packages; packages are installed in parallel, so it's typically quicker than npm, and it generates its own lockfile, yarn.lock.
What did I watch this week?
- System Design Mock Interview
- Latency Vs Throughput
- Advice from a WhatsApp engineer
- Building a flying umbrella
- Sam’s advice for people in their 20s
- Vim as your editor
What did I Read this week?
- System Design: DoorDash — a prepared food delivery service
- Yarn vs npm
- How chad built a No-code startup
- Product management is broken, a change is coming