An Introduction to Artificial Intelligence Assignment 9 Answers


Q1. Which of the following is true about the MAP (maximum a posteriori) estimation learning framework?

a. It is equivalent to Maximum Likelihood learning with infinite data 
b. It is equivalent to Maximum Likelihood learning if P(θ) is independent of θ 
c. It can be used without having any prior knowledge about the parameters 
d. The performance of MAP is better with dense data compared to sparse data

Answer: a, d
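To see why option (a) holds, here is a minimal Python sketch (not part of the assignment) comparing the ML and MAP estimates of a coin's heads probability under an assumed Beta(a, b) prior; the prior's influence fades as the data grows.

```python
# Minimal sketch (invented numbers): with a Beta(a, b) prior on a coin's
# heads probability, the MAP estimate is (h + a - 1) / (n + a + b - 2),
# while the ML estimate is h / n. As n grows the prior's pull vanishes,
# illustrating option (a).
def ml_estimate(heads, n):
    return heads / n

def map_estimate(heads, n, a=5.0, b=5.0):
    # Mode of the Beta(h + a, n - h + b) posterior.
    return (heads + a - 1) / (n + a + b - 2)

for n in (10, 100, 10_000):
    heads = int(0.7 * n)  # pretend 70% of tosses came up heads
    print(n, ml_estimate(heads, n), map_estimate(heads, n))
# The two estimates converge as n increases; with a uniform Beta(1, 1)
# prior they coincide exactly for any n.
```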

Q2. What facts are true about smoothing?

a. Smoothed estimates of probabilities fit the evidence better than un-smoothed estimates. 
b. The process of smoothing can be viewed as imposing a prior distribution over the set of parameters. 
c. Smoothing allows us to account for data which wasn't seen in the evidence. 
d. Smoothing is a form of regularization which prevents overfitting in Bayesian networks.

Answer: a, c
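As a concrete illustration, below is a small Python sketch of add-one (Laplace) smoothing on invented counts; it shows how smoothing reserves probability mass for outcomes never seen in the evidence.

```python
# Add-one (Laplace) smoothing sketch with made-up counts. Smoothing
# reserves probability mass for outcomes never observed in the evidence,
# which is what option (c) is about.
from collections import Counter

observations = ["x", "x", "y", "x", "y"]  # hypothetical evidence; "z" never occurs
vocab = ["x", "y", "z"]
counts = Counter(observations)
n = len(observations)

unsmoothed = {v: counts[v] / n for v in vocab}
smoothed = {v: (counts[v] + 1) / (n + len(vocab)) for v in vocab}
print(unsmoothed)  # {'x': 0.6, 'y': 0.4, 'z': 0.0}
print(smoothed)    # {'x': 0.5, 'y': 0.375, 'z': 0.125}
```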

Q3. Consider three boolean variables X, Y, and Z. Consider the following data:

Multiple Bayesian networks could be used to model such a universe. Suppose we model it with the Bayesian network shown below:

If the value of the parameter P(¬z|x,¬y) is m/n, where m and n have no common factors, then what is the value of m + n? Assume add-one smoothing.

Answer: 343.6
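Since the question's data table and network figure are not reproduced above, the sketch below uses invented boolean records purely to show the mechanics of estimating a CPT entry such as P(¬z|x,¬y) with add-one smoothing.

```python
# Hypothetical reconstruction: the question's data is not shown here, so
# these boolean records are invented just to demonstrate estimating
# P(not z | x, not y) with add-one smoothing.
data = [
    # (x, y, z)
    (True, False, True),
    (True, False, False),
    (True, False, False),
    (False, True, True),
]

rows = [(x, y, z) for (x, y, z) in data if x and not y]
count_not_z = sum(1 for (_, _, z) in rows if not z)

# Add-one smoothing: add 1 to each of the two outcomes (z, not z),
# so the denominator grows by 2.
p = (count_not_z + 1) / (len(rows) + 2)
print(p)  # (2 + 1) / (3 + 2) = 3/5 on this invented data
```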

Q4. Consider the following Bayesian Network from which we wish to compute P(x|z) using rejection sampling:

Answer: 86.9
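The question's network is likewise not reproduced here, so the following sketch runs rejection sampling on a stand-in two-node network X → Z with invented CPTs, just to show how samples inconsistent with the evidence z are discarded.

```python
# Rejection-sampling sketch on a stand-in two-node network X -> Z
# (the question's actual network is not shown, so these CPT numbers
# are invented). Samples whose Z disagrees with the evidence z are
# thrown away, and P(x | z) is estimated from the survivors.
import random

def sample_once():
    x = random.random() < 0.3            # P(x) = 0.3, hypothetical
    p_z = 0.8 if x else 0.1              # P(z | x) and P(z | not x), hypothetical
    z = random.random() < p_z
    return x, z

kept = []
for _ in range(100_000):
    x, z = sample_once()
    if z:                                # reject samples inconsistent with evidence z
        kept.append(x)

print(sum(kept) / len(kept))             # ~0.24 / 0.31 = 0.774 analytically
```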

Q5. Assume that we toss a biased coin with heads probability p 100 times and get heads 66 times. If the Maximum Likelihood estimate of the parameter p is m/n, where m and n have no common factors, then what is the value of m + n?

Answer: 83 (the ML estimate is 66/100 = 33/50, so m + n = 33 + 50 = 83)
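A quick check of the arithmetic with exact rationals:

```python
# Verify the reduced fraction with Python's exact rational type.
from fractions import Fraction

p_hat = Fraction(66, 100)        # ML estimate of the heads probability
print(p_hat)                     # 33/50, automatically in lowest terms
print(p_hat.numerator + p_hat.denominator)  # 83
```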

Q6. Now, assume that we had a prior distribution over p as shown below:

Answer: 6.5
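The prior over p appears in a figure that is not reproduced above, so the sketch below uses an invented discrete prior just to show how a MAP estimate weighs the prior against the likelihood of 66 heads in 100 tosses.

```python
# Invented discrete prior over p (the question's actual prior is in a
# figure not shown here). MAP picks the p that maximizes prior times
# likelihood for the observed 66 heads out of 100.
from math import comb

prior = {0.5: 0.6, 0.66: 0.3, 0.9: 0.1}   # hypothetical P(p)

def posterior_score(p):
    likelihood = comb(100, 66) * p**66 * (1 - p)**34
    return prior[p] * likelihood

p_map = max(prior, key=posterior_score)
print(p_map)  # 0.66 for this invented prior
```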

Q7. Which of the following task(s) are not suited for a goal-based agent?

Answer: b, c

Q8. Which of the following are true?

a. Rejection sampling is very wasteful when the probability of getting the evidence in the samples is very low. 
b. We perform conditional probability weighting on the samples while doing Gibbs sampling in the MCMC algorithm, since we have already fixed the evidence variables. 
c. We perform a random walk while sampling variables in Likelihood Weighting and MCMC with Gibbs sampling, but not in Rejection sampling. 
d. Likelihood Weighting functions well if we have many evidence variables, with some samples having nearly all the total weight.

Answer: a
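To make option (a) concrete, the sketch below contrasts rejection sampling with likelihood weighting on a stand-in two-node network X → E with rare evidence (all numbers invented): rejection sampling discards almost every sample, while likelihood weighting keeps them all and weights them instead.

```python
# Contrast rejection sampling and likelihood weighting on a stand-in
# network X -> E with rare evidence e (all CPT numbers invented).
import random

P_X = 0.5
P_E_GIVEN = {True: 0.02, False: 0.001}      # P(e | x) and P(e | not x): rare evidence

N = 200_000

# Rejection sampling: sample X and E from the network, keep only E = e.
kept = []
for _ in range(N):
    x = random.random() < P_X
    if random.random() < P_E_GIVEN[x]:
        kept.append(x)
print(len(kept), "of", N, "samples survive rejection")   # roughly 1% survive

# Likelihood weighting: fix E = e and weight each sample by P(e | x).
weights = {True: 0.0, False: 0.0}
for _ in range(N):
    x = random.random() < P_X
    weights[x] += P_E_GIVEN[x]
print(weights[True] / (weights[True] + weights[False]))  # P(x | e) = 0.02/0.021 = 0.95
```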

Q9. Consider the following Bayesian Network. Which of the following expressions for P(C|A,B,D,F,E) are correct?

a. P(C|A,B,D,F,E) = α · P(C|A) · P(C|B) 
b. P(C|A,B,D,F,E) = α · P(C|A,B) 
c. P(C|A,B,D,F,E) = α · P(C|A,B) · P(D|C,E) 
d. P(C|A,B,D,F,E) = α · P(C|A,B,D,E)

Answer: b, c

Q10. Which of the following options are correct about the environment of Tic Tac Toe?

a. Fully observable 
b. Stochastic 
c. Continuous 
d. Static

Answer: a, d (Tic Tac Toe is fully observable and static; its environment is deterministic rather than stochastic, and discrete rather than continuous)