Mar 19

This week the main topic was reasoning under uncertainty.

More specifically, we learned about the three types of machine learning: supervised, unsupervised, and reinforcement learning, along with their definitions.

More detail was given on the definitions of supervised and unsupervised learning. We then revisited discrete probability calculations, such as unions of events.
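
As a refresher for myself, a small worked example of the union (inclusion-exclusion) rule we revisited; the probabilities are made-up numbers purely for illustration:

```python
# Inclusion-exclusion for two events: P(A or B) = P(A) + P(B) - P(A and B).
# The probabilities below are invented purely to illustrate the formula.
p_a = 0.5          # P(A)
p_b = 0.25         # P(B)
p_a_and_b = 0.125  # P(A and B)

p_a_or_b = p_a + p_b - p_a_and_b
print(p_a_or_b)  # 0.625
```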

Lastly, we learned about Naive Bayes calculations.
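
A minimal sketch of how a Naive Bayes calculation works, assuming a toy two-class spam/ham example with made-up priors and likelihoods (none of these numbers come from the lecture):

```python
# Naive Bayes: P(class | features) is proportional to P(class) * product of P(feature | class),
# assuming the features are conditionally independent given the class.
priors = {"spam": 0.4, "ham": 0.6}              # P(class), made-up numbers
likelihoods = {                                  # P(word present | class), made-up numbers
    "spam": {"offer": 0.7, "meeting": 0.1},
    "ham":  {"offer": 0.2, "meeting": 0.6},
}

def naive_bayes_scores(observed_words):
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for word in observed_words:
            score *= likelihoods[cls][word]
        scores[cls] = score
    return scores

scores = naive_bayes_scores(["offer"])
total = sum(scores.values())
posteriors = {cls: s / total for cls, s in scores.items()}  # normalize to get probabilities
print(posteriors)  # spam ≈ 0.70, ham ≈ 0.30
```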

Mar 9

We learned about adversarial search, which covers search problems that consider an opponent who is also trying to maximize their own chance of winning. In particular we covered minimax and the concept of pruning. Minimax tries to maximize our chance of winning under the assumption that the opponent plays their own optimal moves. Pruning exploits the fact that some branches cannot affect the final decision because they are provably inferior, so we can cut them off to reduce processing time. IDS was mentioned as a way to deal with the time constraints of some games. We were given exercises to run the alpha-beta algorithm on a given tree and show where pruning occurs. The concepts of zero-sum and non-zero-sum games were introduced to make clear that not all of game theory is zero-sum.
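
To check my own understanding, a minimal minimax with alpha-beta pruning over a hand-coded game tree; the tree shape and leaf values are a small toy example of my own, not the exercise tree from class:

```python
import math

# Toy game tree: internal nodes are lists of children, leaves are numeric utilities.
# These values are made up; the exercise tree from class is in my notes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):          # leaf: return its utility
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # remaining children cannot matter: prune
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:               # prune
                break
        return value

print(alphabeta(tree, maximizing=True))  # 3 for this toy tree
```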

The lights went out in the middle of class, which was a moment of delight but delayed our studies a bit.

Mar 2

In week 3 we studied informed search, where the search function is augmented with a heuristic value. We learned multiple ways to assign a heuristic value, but the core idea behind them is to relax the problem so that an estimate of the remaining cost becomes easy to compute.

From this we learned algorithms such as greedy best-first search, which expands nodes based only on the heuristic value, and A*, which also takes the actual path cost into account and is a more reliable way of solving these problems.

h(n) is admissible only if it never overestimates the true cost of reaching the goal, i.e. h(n) ≤ h*(n) for every node n; otherwise the heuristic is not admissible.
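
A minimal A* sketch on a small hand-made graph, assuming non-negative step costs and an admissible heuristic; the graph, costs, and heuristic values are invented for illustration. Greedy best-first search would be the same loop but ordering the frontier by h(n) alone instead of g(n) + h(n):

```python
import heapq

# Invented graph: each state maps to (neighbor, step cost) pairs.
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
    "G": [],
}
h = {"S": 4, "A": 3, "B": 2, "G": 0}   # admissible: never overestimates the remaining cost

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # entries are (f = g + h, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in graph[state]:
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h[nxt], new_g, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("S", "G"))  # (['S', 'A', 'B', 'G'], 5)
```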

We also briefly looked at local search, as opposed to complete search. The simplest method was hill climbing, which searches the neighbors of the current state and moves to the best value among those close results.
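
A minimal hill-climbing sketch on a one-dimensional toy objective of my own (not from class), just to capture the idea of moving to the best nearby neighbour until no neighbour is better:

```python
def hill_climb(objective, start, step=0.1, max_iters=1000):
    """Greedy local search: repeatedly move to the better neighbour until stuck."""
    current = start
    for _ in range(max_iters):
        neighbors = [current - step, current + step]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(current):   # no better neighbour: local maximum
            return current
        current = best
    return current

# Toy objective with a single peak at x = 2 (invented for illustration)
f = lambda x: -(x - 2) ** 2
print(round(hill_climb(f, start=0.0), 2))  # ≈ 2.0
```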

Then we got confused about simulated annealing, although we started to understand it better by the end of the lesson. I began to see that it is based on probability, so it may require further reading.
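
To make the probabilistic part concrete for myself, a minimal simulated annealing sketch: improving moves are always accepted, while worse moves are accepted with a probability that shrinks as the move gets worse and as the temperature drops. The objective, cooling schedule, and parameters are my own toy choices, not from the lecture:

```python
import math, random

def simulated_annealing(objective, start, temp=1.0, cooling=0.99, steps=5000):
    current = start
    for _ in range(steps):
        candidate = current + random.uniform(-0.5, 0.5)    # random neighbour
        delta = objective(candidate) - objective(current)   # positive means improvement
        # Always accept improvements; accept worse moves with probability exp(delta / temp)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling                                      # cool down over time
    return current

# Toy objective with its maximum at x = 2 (invented for illustration)
f = lambda x: -(x - 2) ** 2
random.seed(0)
print(simulated_annealing(f, start=-5.0))  # ends up close to 2.0
```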

Lastly, we studied genetic algorithms, where candidate solutions are varied and selected by trial and error until a good solution is found.
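
A very small genetic algorithm sketch, assuming the toy problem of maximizing the number of ones in a bit string; the problem, population size, mutation rate, and generation count are all my own invented choices:

```python
import random

random.seed(1)
TARGET_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

fitness = lambda bits: sum(bits)                     # count of ones in the bit string

def crossover(a, b):
    cut = random.randint(1, TARGET_LEN - 1)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the fitter half as parents, then refill the population with mutated offspring
    parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best), best)   # fitness should be at or near 20 (all ones)
```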

Feb 24

Uninformed search functions

This week we studied more aspects of intelligent systems, in particular the details of uninformed search. A search problem is defined by components that determine how we search for a solution. The six attributes are listed below (a small coded example follows the list):

  • States
  • Initial state
  • Actions
  • Transition Model
  • Goal test
  • Path cost
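
As a quick self-check, here is the list above written out as code for a tiny invented route-finding problem (the state names, actions, and costs are all made up by me, not from the lecture):

```python
# The six components spelled out for a tiny invented route-finding problem.
STATES = {"A", "B", "C", "G"}                       # state space
INITIAL_STATE = "A"                                  # initial state
ACTIONS = {                                          # actions available in each state
    "A": ["go_B", "go_C"],
    "B": ["go_G"],
    "C": ["go_G"],
    "G": [],
}
TRANSITIONS = {                                      # transition model: (state, action) -> state
    ("A", "go_B"): "B", ("A", "go_C"): "C",
    ("B", "go_G"): "G", ("C", "go_G"): "G",
}
STEP_COSTS = {                                       # path cost is the sum of step costs
    ("A", "go_B"): 2, ("A", "go_C"): 5,
    ("B", "go_G"): 4, ("C", "go_G"): 1,
}

def goal_test(state):                                # goal test
    return state == "G"

# Example: applying one action from the initial state
state = TRANSITIONS[(INITIAL_STATE, "go_B")]
print(state, goal_test(state))  # B False
```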

The state graph consists of vertices and edges.

Each search node has a state, parent node, action, path cost, and depth. The evaluation of search strategies was also studied in this lecture.
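
A minimal sketch of what such a search node could look like, based only on the fields listed above (the class and the tiny usage example are my own illustration, not code from the lecture):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the state this node represents
    parent: Optional["Node"] = None  # node that generated this one
    action: Any = None               # action applied to the parent to reach this node
    path_cost: float = 0.0           # g(n): total cost from the root
    depth: int = 0                   # number of actions from the root

    def child(self, state, action, step_cost):
        """Build the child node reached by applying `action` from this node."""
        return Node(state, self, action, self.path_cost + step_cost, self.depth + 1)

root = Node(state="A")
child = root.child(state="B", action="A->B", step_cost=4)
print(child.path_cost, child.depth)  # 4.0 1
```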

Lastly we studied multiple uninformed search strategies.

BFS, DFS, DLS, UCS, and IDS.
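
A minimal breadth-first search sketch on a small invented graph, just to pin down the frontier and explored-set bookkeeping; the other strategies mainly differ in how the frontier is ordered or bounded. The graph below is made up for illustration:

```python
from collections import deque

# Invented undirected graph as an adjacency list
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def bfs(start, goal):
    frontier = deque([[start]])       # FIFO queue of paths (a stack here would give DFS)
    explored = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(bfs("A", "E"))  # ['A', 'B', 'D', 'E']
```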

Further information is in the notes.

Feb 17

The first topic was the introduction to AI, its definitions, and a small part of its history. It was surprising to me that the term was coined more than 60 years ago. We went further in depth on this introduction, which included the four components of the AI field: thinking rationally, thinking humanly, acting rationally, and acting humanly. We also learned a bit about how the Turing test defines intelligence, as well as AI's foundations (pillars) such as philosophy, and its implementations today. The last part of this introduction was the application domains.

The second topic was intelligent agent design and its definitions: an agent perceives its environment and acts rationally. We learned how to measure and characterize the environment, as well as the agent types, which included reflex and utility-based agents among others. Lastly, we needed to form a group and discuss what we needed to do for the project.
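
To make the reflex-agent idea concrete for myself, a tiny simple-reflex-agent sketch in the style of the textbook vacuum-world example (the percept format and the rules are my own toy choices, not our project):

```python
# A simple reflex agent chooses its action from the current percept alone,
# using condition-action rules; it keeps no internal state.
def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```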