
Commit dd8fecf

README updates.

1 parent 1514489 commit dd8fecf

12 files changed: 24 additions & 22 deletions

ch01-intuition_of_ai/README.md (1 addition & 1 deletion)

```diff
@@ -4,4 +4,4 @@ Intelligence is a mystery. Intelligence is a concept that has no agreed upon def
 This directory does not contain any code since there are no specific algorithm implementations discussed in Chapter 1.
 
 ## Summary
-![Chapter 1 Summary](readme_assets/Ch1-Summary.png)
+![Chapter 1 summary](readme_assets/Ch1-Summary.png)
```

ch02-search_fundamentals/README.md (1 addition & 1 deletion)

```diff
@@ -7,4 +7,4 @@ Think about when you explore things you want to learn. Some might look at a wide
 ![BFS and DFS](readme_assets/Bfs-Dfs-Combined.png)
 
 ## Summary
-![Chapter 2 Summary](readme_assets/Ch2-Summary.png)
+![Chapter 2 summary](readme_assets/Ch2-Summary.png)
```

ch03-intelligent_search/README.md (3 additions & 3 deletions)

```diff
@@ -6,5 +6,5 @@ A* search is pronounced as “A star search”. The A* algorithm usually improve
 
 Total cost is calculated using two metrics: the total distance from the start node to the current node, and the estimated cost of moving to a specific node by utilizing a heuristic. When attempting to minimize cost, a lower value indicates a better-performing solution.
 
-![A Star Function](readme_assets/A-Star_Function.png)
+![A star function](readme_assets/A-Star_Function.png)
 
```
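
The total-cost metric described in that paragraph is conventionally written f(n) = g(n) + h(n). As a rough sketch only (function and variable names are illustrative, not this repository's listing):

```python
import heapq

def a_star(graph, start, goal, h):
    # Minimal A* sketch: `graph` maps a node to {neighbor: edge_cost},
    # and `h` estimates the remaining cost from a node to `goal`.
    # Frontier priority is f(n) = g(n) + h(n): distance travelled so
    # far plus the heuristic estimate; lower values are expanded first.
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this node more cheaply
        best_g[node] = g
        for neighbor, cost in graph[node].items():
            heapq.heappush(frontier, (g + cost + h(neighbor),
                                      g + cost, neighbor, path + [neighbor]))
    return None  # goal unreachable
```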
```diff
@@ -11,9 +11,9 @@
 ## Adversarial Search
 Adversarial problems require one to anticipate, understand, and counteract the actions of the opponent in pursuit of a goal. Some examples of adversarial problems are two-player turn-based games such as chess, tic-tac-toe, and Connect Four.
 
 Min-max search aims to build a tree of possible outcomes based on the moves each player could make, favoring paths that are advantageous to the agent whilst avoiding paths that are favorable to the opponent. It does this by simulating possible moves and scoring the resulting state with a heuristic. Min-max attempts to discover as many future states as possible; however, due to memory and computation limitations, discovering the entire game tree may not be realistic, so it searches to a specified depth.
 
-![Min-max Search](readme_assets/Min_max-Simple-full.png)
+![Min-max search](readme_assets/Min_max-Simple-full.png)
 
 ## Summary
-![Chapter 3 Summary](readme_assets/Ch3-Summary.png)
+![Chapter 3 summary](readme_assets/Ch3-Summary.png)
```
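
A compact, depth-limited min-max sketch of the search that paragraph describes. The `moves`, `apply_move`, and `score` helpers are hypothetical game-specific functions (legal moves, resulting state, heuristic evaluation), not names from this repository:

```python
def min_max(state, depth, maximizing, moves, apply_move, score):
    # Depth-limited min-max: the maximizing player picks the move with
    # the highest score, the opponent the lowest, recursing until the
    # depth limit or a state with no legal moves is reached.
    if depth == 0 or not moves(state):
        return score(state)
    results = (min_max(apply_move(state, m), depth - 1, not maximizing,
                       moves, apply_move, score) for m in moves(state))
    return max(results) if maximizing else min(results)
```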

ch04-evolutionary_algorithms/README.md (2 additions & 2 deletions)

```diff
@@ -4,7 +4,7 @@ The theory of evolution suggests that organisms evolve through reproduction by p
 ## Genetic Algorithm
 The example used is the Knapsack Problem. The Knapsack Problem has a number of items that can be placed into the bag. A simple way to describe a possible solution that contains some items but not others is binary encoding. Binary encoding represents excluded items with 0s and included items with 1s. If, for example, the value at gene index 3 is 1, that item is marked for inclusion. The complete binary string is always the same length: the total number of items available for selection.
 
-![Knapsack Encoding](readme_assets/Knapsack_Encoding.png)
+![Knapsack encoding](readme_assets/Knapsack_Encoding.png)
 
 ## Summary
-![Chapter 4 Summary](readme_assets/Ch4-Summary.png)
+![Chapter 4 summary](readme_assets/Ch4-Summary.png)
```
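
A minimal sketch of that binary encoding, with made-up item values (a full Knapsack solution would also track weights against the bag's capacity):

```python
import random

# Hypothetical item values, purely for illustration.
item_values = [300, 1200, 160, 2000, 78]

def random_individual():
    # One gene per item: 1 = included in the bag, 0 = excluded, so
    # every individual has the same length as the list of items.
    return [random.randint(0, 1) for _ in item_values]

def fitness(individual):
    # Total value of the included items.
    return sum(v for gene, v in zip(individual, item_values) if gene == 1)

individual = random_individual()        # e.g. [0, 1, 0, 1, 1]
print(individual, fitness(individual))
```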

ch05-advanced_evolutionary_approaches/README.md (1 addition & 1 deletion)

```diff
@@ -2,4 +2,4 @@
 Nothing to see here. Refer to the code in ch04-evolutionary_algorithms.
 
 ## Summary
-![Chapter 5 Summary](readme_assets/Ch5-Summary.png)
+![Chapter 5 summary](readme_assets/Ch5-Summary.png)
```

ch06-swarm_intelligence-ants/README.md (2 additions & 2 deletions)

```diff
@@ -3,7 +3,7 @@ Similarly to the theory of evolution, the observation of the behavior of lifefor
 
 The Ant Colony Optimization algorithm is inspired by the behavior of ants moving between destinations, dropping pheromones, and acting on pheromones that they come across. The emergent behavior is ants converging on paths of least resistance.
 
-![ACO Update](readme_assets/ACO-Update.png)
+![ACO update](readme_assets/ACO-Update.png)
 
 ## Summary
-![Chapter 6 Summary](readme_assets/Ch6-Summary.png)
+![Chapter 6 summary](readme_assets/Ch6-Summary.png)
```
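
A sketch of the pheromone update that drives this emergent behavior. All names and the evaporation rate are illustrative, not taken from the chapter's code:

```python
def update_pheromones(pheromones, ant_tours, evaporation_rate=0.5):
    # Two-part update: existing pheromone evaporates, then each ant
    # deposits pheromone on the edges of its tour, with shorter
    # (better) tours depositing more.
    for edge in pheromones:
        pheromones[edge] *= (1 - evaporation_rate)
    for edges, tour_length in ant_tours:
        for edge in edges:
            pheromones[edge] += 1.0 / tour_length

pheromones = {("A", "B"): 1.0, ("B", "C"): 1.0}
update_pheromones(pheromones, [([("A", "B"), ("B", "C")], 10.0)])
print(pheromones)  # {('A', 'B'): 0.6, ('B', 'C'): 0.6}
```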
ch07-swarm_intelligence-particles/README.md (2 additions & 2 deletions)

```diff
@@ -1,7 +1,7 @@
 # Chapter 7 - Swarm Intelligence - Particles
 Particle swarm optimization involves a group of individuals at different points in the solution space, all using real-life swarm concepts to find an optimal solution in the space. Imagine a swarm of bees that spreads out looking for flowers and gradually converges on the area with the highest density of flowers. As more bees find flowers in an area, more are attracted to it. At its core, this is what particle swarm optimization entails. Particles make velocity adjustments based on an inertia component, a cognitive component, and a social component.
 
-![PSO Velocity Update](readme_assets/PSO-Velocity-Vis.png)
+![PSO velocity update](readme_assets/PSO-Velocity-Vis.png)
 
 ## Summary
-![Chapter 6 Summary](readme_assets/Ch7-Summary.png)
+![Chapter 7 summary](readme_assets/Ch7-Summary.png)
```
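
The three components named in that paragraph combine per dimension in the velocity update. A minimal sketch, with illustrative coefficient values rather than this repository's:

```python
import random

def update_velocity(velocity, position, personal_best, swarm_best,
                    inertia=0.7, cognitive=1.5, social=1.5):
    # Inertia keeps the particle moving as it was, the cognitive term
    # pulls it toward its own best-known position, and the social term
    # pulls it toward the swarm's best-known position.
    return [inertia * v
            + cognitive * random.random() * (p - x)
            + social * random.random() * (g - x)
            for v, x, p, g in zip(velocity, position,
                                  personal_best, swarm_best)]
```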

ch08-machine_learning/README.md (4 additions & 4 deletions)

```diff
@@ -6,6 +6,6 @@ One of the most common techniques in traditional machine learning is supervised
 ## Linear Regression
 Linear regression is one of the simplest machine learning algorithms. It finds the relationship between two variables and allows us to predict one variable given the other. An example of this is predicting the price of a diamond based on its carat value. By looking at many examples of diamonds with known price and carat values, we can teach a model the relationship and ask it to make predictions.
 
-![Linear Regression Example](readme_assets/Possible-regression-lines.png)
-![Linear Regression Calculation](readme_assets/Calculating-regression-line.png)
+![Linear regression example](readme_assets/Possible-regression-lines.png)
+![Linear regression calculation](readme_assets/Calculating-regression-line.png)
 
```
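
A least-squares sketch of that carat-to-price relationship. The sample data is made up for illustration:

```python
def fit_line(xs, ys):
    # Ordinary least squares for one feature: slope m and intercept c
    # of the best-fit line y = m * x + c.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return m, mean_y - m * mean_x

# Hypothetical (carat, price) training examples.
carats = [0.3, 0.5, 0.7, 1.0, 1.5]
prices = [450, 900, 1500, 2800, 5600]
m, c = fit_line(carats, prices)
print(f"estimated price of a 0.8 carat diamond: {m * 0.8 + c:.0f}")
```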
```diff
@@ -12,7 +12,7 @@
 ## Decision Trees
 Decision trees are structures that describe a series of decisions that are made to find a solution to a problem. If we’re deciding whether or not to wear shorts for the day, we might make a series of decisions to inform the outcome. Will it be cold during the day? If not, will we be out late in the evening when it does get cold? We might decide to wear shorts on a warm day, but not if we will be out when it gets cold.
 
-![Decision Tree Example](readme_assets/cl_human_diamond_tree.png)
+![Decision tree example](readme_assets/cl_human_diamond_tree.png)
 
 ## Summary
-![Chapter 8 Summary](readme_assets/Ch8-Summary.png)
+![Chapter 8 summary](readme_assets/Ch8-Summary.png)
```
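
The shorts decision, written out as explicit branches. In practice a decision-tree learner (CART, for example) would induce such splits from labeled examples rather than having them hand-written:

```python
def wear_shorts(cold_during_day, out_late_when_cold):
    # Root split: a cold day rules out shorts immediately; otherwise
    # the second question (out late when it gets cold?) decides.
    if cold_during_day:
        return False
    return not out_late_when_cold

print(wear_shorts(cold_during_day=False, out_late_when_cold=True))  # False
```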

ch09-artificial_neural_networks/README.md (4 additions & 4 deletions)

```diff
@@ -10,9 +10,9 @@ The general flow for forward propagation includes the following steps:
 5. Sum the weighted outputs of the hidden nodes into the output node: Sum the weighted results of the activation function from all hidden nodes.
 6. Activation function for the output node: Apply an activation function to the sum of the weighted hidden-node results.
 
-![Forward Propagation 1](readme_assets/Ex-ANN-exercise-solution-1.png)
+![Forward propagation 1](readme_assets/Ex-ANN-exercise-solution-1.png)
 
-![Forward Propagation 2](readme_assets/Ex-ANN-exercise-solution-2.png)
+![Forward propagation 2](readme_assets/Ex-ANN-exercise-solution-2.png)
 
 ## Back Propagation
 Phase A: Setup
```
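
A sketch of steps 5 and 6 above, plus the hidden layer that feeds them. The sigmoid activation and the weight shapes (one weight list per hidden node) are illustrative assumptions, not this repository's exercise code:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Weighted sum into each hidden node, then an activation function;
    # then a weighted sum of the hidden outputs into the output node,
    # followed by the final activation (steps 5 and 6).
    hidden_outputs = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
                      for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden_outputs)))

# Illustrative values: 2 inputs, 2 hidden nodes, 1 output node.
print(forward([0.5, 0.9], [[0.2, 0.8], [0.4, 0.6]], [0.3, 0.9]))
```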
```diff
@@ -24,7 +24,7 @@ Phase C: Training
 2. Update weights in the ANN: The weights of the ANN are the only thing that the network itself can adjust; the architecture and configurations defined in phase A don’t change while training the network. The weights essentially encode the “intelligence” of the network. Weights are adjusted to be larger or smaller, which impacts the strength of the inputs.
 3. Stopping condition: Training cannot happen indefinitely. Similarly to many of the algorithms explored in this book, a sensible stopping condition needs to be determined. If we have a large dataset, we might decide to use 500 examples in our training dataset over 1,000 iterations to train the ANN. This means that the 500 examples are passed through the network 1,000 times, with the weights adjusted in every iteration.
 
-![Back Propagation](readme_assets/ANN-backpropagation-chain-calc-adjust.png)
+![Back propagation](readme_assets/ANN-backpropagation-chain-calc-adjust.png)
 
 ## Summary
-![Chapter 9 Summary](readme_assets/Ch9-Summary.png)
+![Chapter 9 summary](readme_assets/Ch9-Summary.png)
```
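
A sketch of the single-weight adjustment that step 2 describes: nudge the weight in proportion to the error attributed to it (via the chain rule) and the input that flowed through it. The learning rate and sample values are illustrative:

```python
def update_weight(weight, input_value, error_delta, learning_rate=0.1):
    # The weight is the only quantity the network itself can adjust;
    # larger nudges follow from larger attributed error and stronger inputs.
    return weight + learning_rate * error_delta * input_value

print(update_weight(weight=0.5, input_value=0.9, error_delta=0.05))  # 0.5045
```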

ch10-reinforcement_learning/README.md (4 additions & 2 deletions)

```diff
@@ -3,7 +3,9 @@ Q-learning is an approach in reinforcement learning that uses the different stat
 
 In RL with Q-learning, there is the concept of a reward table, called a Q-table. This table consists of columns that represent the possible actions and rows that represent the possible states in the environment. The point of the Q-table is to describe which actions are more favorable for the agent as it seeks a goal. The values that represent favorable actions are learned by simulating the possible actions in the environment and learning from the outcome and the change in state.
 
-![PSO Velocity Update](readme_assets/RL-Intuition.png)
+![RL intuition](readme_assets/RL-Intuition.png)
+![Q-learning calculation](readme_assets/Q-learning-formula.png)
+![Q-table values](readme_assets/Calculate-Q-table-values.png)
 
 ## Summary
-![Chapter 10 Summary](readme_assets/Ch10-Summary.png)
+![Chapter 10 summary](readme_assets/Ch10-Summary.png)
```
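
A sketch of how one simulated step updates the Q-table described above, with rows as states and columns as actions. The hyperparameter values are illustrative:

```python
def update_q(q_table, state, action, reward, next_state,
             learning_rate=0.1, discount=0.9):
    # Standard Q-learning update: move the Q-value for (state, action)
    # toward the observed reward plus the discounted value of the best
    # action available from the resulting state.
    best_next = max(q_table[next_state])
    q_table[state][action] += learning_rate * (
        reward + discount * best_next - q_table[state][action])

q_table = [[0.0, 0.0], [0.0, 0.0]]              # 2 states x 2 actions
update_q(q_table, state=0, action=1, reward=5, next_state=1)
print(q_table)                                   # [[0.0, 0.5], [0.0, 0.0]]
```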
