### NPTEL INTRODUCTION TO MACHINE LEARNING ASSIGNMENT 2

**NPTEL INTRODUCTION TO MACHINE LEARNING ASSIGNMENT WEEK 2**


**1. The parameters obtained in linear regression**

a. can take any value in the real space

b. are strictly integers

c. always lie in the range [0,1]

d. can take only non-zero values

**Answer: a. can take any value in the real space**
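Since the options concern the range of least-squares parameters, a minimal sketch (toy data made up for illustration) shows that the fitted slope and intercept are unconstrained real numbers; negative values are perfectly legal:

```python
def ols_fit(xs, ys):
    # Closed-form least squares for f(x) = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Decreasing data gives a negative slope: nothing restricts the parameters
a, b = ols_fit([0, 1, 2], [5, 3, 1])  # a = -2.0, b = 5.0
```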

**2.** Suppose that we have *N* independent variables (*X1*, *X2*, …, *XN*) and the dependent variable is *Y*. Now imagine that you are applying linear regression, fitting the best-fit line using least squares error on this data. You find that the correlation coefficient of one of the variables (say *X1*) with *Y* is −0.005.

a. Regressing *Y* on *X*1 mostly does not explain away *Y* .

b. Regressing *Y* on *X*1 explains away *Y* .

c. The given data is insufficient to determine if regressing *Y* on *X*1 explains away *Y* or not.

**Answer: a. Regressing Y on X1 mostly does not explain away Y.**
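For intuition: in simple linear regression the fraction of variance explained is R² = r², so r = −0.005 gives R² = 0.000025, i.e. the linear fit explains almost none of the variance in *Y*. A sketch of the correlation computation (helper name and toy data are illustrative):

```python
def pearson_r(xs, ys):
    # Sample Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

r = -0.005
r_squared = r * r  # ~2.5e-05 of the variance in Y explained by X1
```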

**3.** Consider the following five training examples:

We want to learn a function *f*(*x*) of the form *f*(*x*) = *ax* + *b*, parameterised by (*a*, *b*). Using mean squared error as the loss function, which of the following parameters would you use to model this function to get a solution with the minimum loss?

a. (4, 3)

b. (1, 4)

c. (4, 1)

d. (3, 4)

**Answer: d. (3, 4)**
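The original table of training examples is not reproduced above, but the selection procedure can be sketched: compute the mean squared error of each candidate (*a*, *b*) and keep the minimiser. The data below is made up so that y = 3x + 4 holds exactly:

```python
def mse(a, b, data):
    # Mean squared error of f(x) = a*x + b over (x, y) pairs
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

# Hypothetical data lying exactly on y = 3x + 4
data = [(1, 7), (2, 10), (3, 13)]
candidates = [(4, 3), (1, 4), (4, 1), (3, 4)]
best = min(candidates, key=lambda p: mse(p[0], p[1], data))  # (3, 4), loss 0
```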


**4.** The relation between studying time (in hours) and grade on the final examination (0–100) in a random sample of students in the Introduction to Machine Learning class was found to be: Grade = 30.5 + 15.2 × h

How will a student’s grade be affected if she studies for four hours?

a. It will go up by 30.4 points.

b. It will go down by 30.4 points.

c. It will go up by 60.8 points.

d. The grade will remain unchanged.

e. It cannot be determined from the information given.

**Answer: c. It will go up by 60.8 points.**
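The arithmetic behind the answer: each extra hour adds the slope 15.2 to the predicted grade, so four hours add 4 × 15.2 = 60.8 points over not studying at all:

```python
def predicted_grade(hours):
    # Fitted model from the question: Grade = 30.5 + 15.2 * h
    return 30.5 + 15.2 * hours

gain = predicted_grade(4) - predicted_grade(0)  # 4 * 15.2 = 60.8 points
```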

**5.** Which of the statements is/are True?

a. Ridge has sparsity constraint, and it will drive coefficients with low values to 0.

b. Lasso has a closed form solution for the optimization problem, but this is not the case for Ridge.

c. Ridge regression does not reduce the number of variables since it never leads a coefficient to zero but only minimizes it.

d. If there are two or more highly collinear variables, Lasso will select one of them randomly.

**Answer: c and d**
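A minimal sketch of why Ridge shrinks but never zeroes a coefficient: for a single centered feature the Ridge solution has the closed form a = Σxy / (Σx² + λ), which approaches 0 as λ grows but is exactly 0 only when Σxy = 0 (helper name and data are illustrative):

```python
def ridge_slope(xs, ys, lam):
    # One-feature ridge regression (data assumed centered):
    # minimizes sum((y - a*x)^2) + lam * a^2  =>  a = sum(xy) / (sum(x^2) + lam)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs, ys = [-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0]
print(ridge_slope(xs, ys, 0.0))   # 2.0  (plain least squares)
print(ridge_slope(xs, ys, 2.0))   # 1.0  (shrunk)
print(ridge_slope(xs, ys, 1e6))   # tiny, but still nonzero
```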

**6.** Consider the following statements:

Assertion(A): Orthogonalization is applied to the dimensions in linear regression.

Reason(R): Orthogonalization makes univariate regression possible in each orthogonal dimension separately to produce the coefficients.

a. Both A and R are true, and R is the correct explanation of A.

b. Both A and R are true, but R is not the correct explanation of A.

c. A is true, but R is false.

d. A is false, but R is true

e. Both A and R are false.

**Answer: a. Both A and R are true, and R is the correct explanation of A.**
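The mechanism R describes (successive orthogonalization, as in Gram–Schmidt) can be sketched on two toy predictors without an intercept: after orthogonalizing x2 against x1, a univariate regression of y on the residual column reproduces the multiple-regression coefficient of x2 (data made up for illustration):

```python
def univariate_coef(z, y):
    # Simple regression coefficient of y on a single column z (no intercept)
    return sum(zi * yi for zi, yi in zip(z, y)) / sum(zi * zi for zi in z)

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 1.0, 2.0, 2.0]
y  = [3.0, 5.0, 8.0, 10.0]

# Orthogonalize x2 against x1, then regress y on the residual column
g  = univariate_coef(x1, x2)
z2 = [b - g * a for a, b in zip(x1, x2)]
a2 = univariate_coef(z2, y)  # matches the multiple-regression coefficient of x2
```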

**7.** Consider the following statements:

Statement A: In forward stepwise selection, at each step the variable with the maximum correlation with the residual is chosen; the residual is then regressed on that variable, and it is added to the predictor.

Statement B: In forward stagewise selection, variables are added one by one to the previously selected variables to produce the best fit so far.

a. Both the statements are True.

b. Statement A is True, and Statement B is False

c. Statement A is False, and Statement B is True.

d. Both the statements are False.

**Answer: d. Both the statements are False.**
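Statement A's procedure (pick the variable most correlated with the residual, regress the residual on it, update) is in fact forward stagewise selection, which is why the labels above are swapped. A sketch of that procedure on centered toy columns (names and data are illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def forward_stagewise(X, y, steps):
    # X: list of predictor columns (assumed centered), y: targets
    r = list(y)                   # residual starts as y itself
    coefs = [0.0] * len(X)
    for _ in range(steps):
        # variable with maximum absolute correlation with the current residual
        j = max(range(len(X)), key=lambda k: abs(dot(X[k], r)) / dot(X[k], X[k]) ** 0.5)
        delta = dot(X[j], r) / dot(X[j], X[j])  # regress residual on X[j]
        coefs[j] += delta
        r = [ri - delta * xi for ri, xi in zip(r, X[j])]
    return coefs

# Orthogonal toy columns: y = 2*x1 + 3*x2 is recovered in two steps
X = [[1.0, -1.0, 1.0, -1.0], [1.0, 1.0, -1.0, -1.0]]
y = [5.0, 1.0, -1.0, -5.0]
coefs = forward_stagewise(X, y, steps=2)  # [2.0, 3.0]
```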

**8.** The linear regression model *y* = *a0* + *a1x1* + *a2x2* + … + *apxp* is to be fitted to a set of *N* training data points having *p* attributes each. Let *X* be the *N*×(*p*+1) matrix of input values (augmented by 1's), *Y* the *N*×1 vector of target values, and *θ* the (*p*+1)×1 vector of parameter values (*a0*, *a1*, *a2*, …, *ap*). If the sum of squared errors is minimized to obtain the optimal regression model, which of the following equations holds?

**Answer: b. θ = (XᵀX)⁻¹XᵀY**
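Minimizing the sum of squared errors yields the normal equation θ = (XᵀX)⁻¹XᵀY. For the single-attribute case (p = 1) the resulting 2×2 system can be solved directly; a sketch (helper name is illustrative):

```python
def normal_equation_fit(xs, ys):
    # Solves theta = (X^T X)^{-1} X^T Y for X = [[1, x_i]] (p = 1),
    # i.e. the 2x2 normal equations for intercept a0 and slope a1
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a0 = (sxx * sy - sx * sxy) / det
    a1 = (n * sxy - sx * sy) / det
    return a0, a1

a0, a1 = normal_equation_fit([0, 1, 2], [1, 3, 5])  # a0 = 1.0, a1 = 2.0
```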


* The material and content uploaded on this website are for general information and reference purposes only. Please attempt the assignment on your own first.