Mar 19

This week the main topic was reasoning under uncertainty.

More specifically, we covered the three types of machine learning: supervised, unsupervised, and reinforcement learning, along with their definitions.

More detail was given on the definitions of supervised and unsupervised learning. We then reviewed discrete probability calculations: unions, intersections, etc.
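As a quick check of the union rule from the probability review, here is a toy calculation; the die and the two events are my own example, not from class:

```python
# Verify P(A or B) = P(A) + P(B) - P(A and B) on a fair six-sided die.
outcomes = set(range(1, 7))
A = {2, 4, 6}          # event: roll is even
B = {4, 5, 6}          # event: roll is greater than 3

def p(event):
    return len(event) / len(outcomes)

p_union = p(A) + p(B) - p(A & B)   # inclusion-exclusion
print(p_union, p(A | B))           # both give 2/3
```

The subtraction of `p(A & B)` corrects for outcomes counted twice (here 4 and 6).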

Lastly, we learned about naive Bayes calculations.
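A naive Bayes calculation can be done by hand; the sketch below classifies an email as spam or ham from two word features, assuming the words are conditionally independent given the class (the "naive" part). All the probability numbers here are made up for illustration:

```python
# Made-up priors and likelihoods for a two-feature spam example.
priors = {"spam": 0.4, "ham": 0.6}
p_free = {"spam": 0.7, "ham": 0.1}      # P(contains "free" | class)
p_meeting = {"spam": 0.05, "ham": 0.5}  # P(contains "meeting" | class)

# Email contains "free" but not "meeting":
# score(class) = P(class) * P(free|class) * P(no meeting|class)
score = {c: priors[c] * p_free[c] * (1 - p_meeting[c]) for c in priors}
total = sum(score.values())
posterior = {c: s / total for c, s in score.items()}
print(posterior)   # spam dominates for this email
```

Normalizing by `total` turns the unnormalized scores into posterior probabilities that sum to 1.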

Mar 9

We learned about adversarial search: search functions that consider an opponent who is also looking for their own highest chance of winning. In particular we covered minimax and the concept of pruning. Minimax tries to maximize our chance of winning under the assumption that the opponent plays their own optimal moves. Pruning takes into account that some outcomes are unimportant because they are provably inferior, so we can cut them off to reduce processing time. IDS (iterative deepening search) addresses the time constraints of some games. We were told to do some exercises applying the alpha-beta algorithm to a given tree and showing where pruning occurs. The concepts of zero-sum and non-zero-sum games were introduced to make clear that not all game theory is zero-sum.
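The alpha-beta exercise can be sketched in code. This is a minimal minimax with alpha-beta pruning on a fixed toy tree of my own (lists are internal nodes, numbers are leaf payoffs for the maximizing player):

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):    # leaf: return its payoff
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # opponent would never allow this
                break                     # prune the remaining children
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [6, 9], [1, 2]]           # three min-nodes under a max root
print(alphabeta(tree, True))              # -> 6
```

In the third subtree the leaf 2 is pruned: after seeing 1, the minimizer can already do no better than alpha = 6, so the branch cannot matter.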

The lights went out today, which created a moment of delight in the middle of class, but it delayed our studies a bit.

Mar 2

In week 3 we studied informed search, where the search function is augmented with a heuristic value. We learned multiple ways to assign a heuristic value, but the core idea behind them is to relax the problem so the remaining cost becomes easier to estimate.

From this we learned algorithms such as greedy best-first search, which expands nodes based only on the heuristic value, and A*, which also takes the actual path cost into account and is a more optimal way of solving these problems.

h(n) is admissible only if it never overestimates the true cost to the goal, i.e. h(n) ≤ h*(n) for every node n; otherwise it is inadmissible and A* can lose its optimality guarantee.
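A* with f(n) = g(n) + h(n) can be sketched on a small hand-made graph; the graph, edge costs, and (admissible) heuristic values below are my own toy example:

```python
import heapq

# Edges: node -> list of (neighbor, step cost); goal is "G".
graph = {"S": [("A", 1), ("B", 4)],
         "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)],
         "G": []}
h = {"S": 4, "A": 3, "B": 1, "G": 0}   # admissible: never overestimates

def astar(start, goal):
    # Priority queue ordered by f = g + h.
    frontier = [(h[start], 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                    # already reached more cheaply
        best_g[node] = g
        for nbr, cost in graph[node]:
            heapq.heappush(frontier,
                           (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None

print(astar("S", "G"))   # -> (4, ['S', 'A', 'B', 'G'])
```

Greedy best-first would order the queue by h alone; because h("B") = 1 is smallest it would rush toward B first, while A* finds the genuinely cheapest path S-A-B-G of cost 4.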

We also briefly looked at local search as an alternative to complete search. The simplest method is hill climbing: search the neighbors and move to the most optimal value among the close results.
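Hill climbing fits in a few lines; this sketch maximizes a one-dimensional toy objective of my own choosing:

```python
def hill_climb(f, x, step=1, max_iters=100):
    # Repeatedly move to the best neighbor until none improves on x.
    for _ in range(max_iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):     # local optimum: stop
            return x
        x = best
    return x

f = lambda x: -(x - 7) ** 2     # single peak at x = 7
print(hill_climb(f, 0))         # -> 7
```

On this objective there is only one peak, so hill climbing reaches it; with multiple peaks it can get stuck on a local optimum, which is what motivates simulated annealing.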

Then we got confused about simulated annealing, though we started to understand it better by the end of the lesson. I began to understand that it is based on probability, so it may require further reading.
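The probabilistic part of simulated annealing is the acceptance rule: worse moves are accepted with probability exp(delta / T), which shrinks as the temperature T cools. This is a sketch on the same kind of one-dimensional toy objective; all parameters are my own choices:

```python
import math
import random

def anneal(f, x, T=10.0, cooling=0.95, steps=500):
    random.seed(0)                       # fixed seed so the demo is repeatable
    best = x
    for _ in range(steps):
        candidate = x + random.choice([-1, 1])
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with prob e^(delta/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            x = candidate
        if f(x) > f(best):
            best = x
        T *= cooling                     # cool down: fewer bad moves later
    return best

f = lambda x: -(x - 7) ** 2
print(anneal(f, 0))
```

Early on, high T lets the search escape local optima by occasionally walking downhill; as T approaches zero the rule degenerates into plain hill climbing.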

Lastly, we studied genetic algorithms, where guided trial and error (selection, crossover, and mutation over a population of candidates) repeats until a good solution is found.
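The cycle of selection, crossover, and mutation can be sketched on the classic "all ones" bit-string problem; the population size, rates, and fitness function below are my own toy choices:

```python
import random

random.seed(1)
LENGTH, POP, GENS = 12, 20, 60
fitness = lambda bits: sum(bits)          # count of 1s; best possible = LENGTH

def evolve():
    pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]          # selection: keep the fitter half
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, LENGTH)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # mutation: flip a random bit
                child[random.randrange(LENGTH)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the fitter half survives each generation, the best fitness never decreases; crossover recombines good partial solutions and mutation keeps injecting variation so the search does not stall.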