# Final Exam


### Instructions

#### Attempt History

| Attempt | Time | Score |
|---|---|---|
| LATEST: Attempt 1 | 71 minutes | 20 out of 30 |

Correct answers are hidden.
Score for this quiz: 20 out of 30
Submitted May 2 at 6:11am. This attempt took 71 minutes.

No collaboration. Time limit: 75 minutes.

#### Question 1 (2 / 2 pts)
| Transaction ID | Items Bought |
|---|---|
| 1 | {Milk, Beer, Diapers} |
| 2 | {Bread, Butter, Milk} |
| 3 | {Milk, Diapers, Cookies} |
| 4 | {Bread, Butter, Cookies} |
| 5 | {Beer, Cookies, Diapers} |
| 6 | {Milk, Diapers, Bread, Butter} |
| 7 | {Bread, Butter, Diapers} |
| 8 | {Beer, Diapers} |
| 9 | {Milk, Diapers, Bread, Butter} |
| 10 | {Beer, Cookies} |

What is the maximum number of size-3 itemsets that can be derived from this data set? Note that the itemsets derived may not be in the dataset.

- 20
- 35
- 40
- 15
- None of the above.

#### Question 2 (2 / 2 pts)

Example of market basket transactions.

| Transaction ID | Items Bought |
|---|---|
| 1 | {a, b, d, e} |
| 2 | {b, c, d} |
| 3 | {a, b, d, e} |
| 4 | {a, c, d, e} |
| 5 | {b, c, d, e} |
| 6 | {b, d, e} |
| 7 | {c, d} |
| 8 | {a, b, c} |
| 9 | {a, d, e} |
| 10 | {b, d} |

Compute the confidence for the association rule {a, e} -> {b}.
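Both answers above can be checked with a short script. For Question 1, the ten transactions use six distinct items, so the count of possible 3-itemsets is C(6, 3); for Question 2, confidence is support({a, e, b}) / support({a, e}). The transaction list below is transcribed from the Question 2 table.

```python
from math import comb

# Question 1: six distinct items appear across the transactions
# (Milk, Beer, Diapers, Bread, Butter, Cookies), so the maximum
# number of 3-itemsets that can be derived is C(6, 3).
print(comb(6, 3))  # 20

# Question 2: transactions transcribed from the table above.
transactions = [
    {"a", "b", "d", "e"}, {"b", "c", "d"}, {"a", "b", "d", "e"},
    {"a", "c", "d", "e"}, {"b", "c", "d", "e"}, {"b", "d", "e"},
    {"c", "d"}, {"a", "b", "c"}, {"a", "d", "e"}, {"b", "d"},
]
# confidence({a, e} -> {b}) = support({a, e, b}) / support({a, e})
support_ae = sum({"a", "e"} <= t for t in transactions)        # 4
support_aeb = sum({"a", "b", "e"} <= t for t in transactions)  # 2
print(support_aeb / support_ae)  # 0.5
```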

#### Question 3 (0 / 2 pts, incorrect)

Which of the following statements are true for any A, B, and C?

I. If A -> B then B -> A.
II. If A -> B and B -> C then A -> C.
III. If A -> C then A union B -> C.
IV. If A union B -> C then A -> C.

- I & II
- I, II, & III
- I, II, & IV
- II & III
- None

#### Question 4 (2 / 2 pts)

Example of market basket transactions.

| Transaction ID | Items Bought |
|---|---|
| 1 | {a, b, d, e} |
| 2 | {b, c, d} |
| 3 | {a, b, d, e} |
| 4 | {a, c, d, e} |
| 5 | {b, c, d, e} |
| 6 | {b, d, e} |
| 7 | {c, d} |
| 8 | {a, b, c} |
| 9 | {a, d, e} |
| 10 | {b, d} |

If the minimum support is set at 40%, how many frequent 3-itemsets will be found? Note that if a 3-itemset is a subset of a larger itemset, it counts as one occurrence.

2
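This count can be reproduced by enumerating every 3-item subset of each transaction; a minimum support of 40% over 10 transactions means a count of at least 4. A quick sketch:

```python
from itertools import combinations
from collections import Counter

# Transactions transcribed from the Question 4 table.
transactions = [
    {"a", "b", "d", "e"}, {"b", "c", "d"}, {"a", "b", "d", "e"},
    {"a", "c", "d", "e"}, {"b", "c", "d", "e"}, {"b", "d", "e"},
    {"c", "d"}, {"a", "b", "c"}, {"a", "d", "e"}, {"b", "d"},
]
min_count = 0.4 * len(transactions)  # 40% of 10 transactions = 4

# A 3-itemset inside a larger transaction counts as one occurrence.
counts = Counter()
for t in transactions:
    for triple in combinations(sorted(t), 3):
        counts[triple] += 1

frequent = sorted(s for s, c in counts.items() if c >= min_count)
print(frequent)  # [('a', 'd', 'e'), ('b', 'd', 'e')]
```

Only {a, d, e} and {b, d, e} appear in at least four transactions, matching the answer of 2.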

#### Question 5 (0 / 2 pts, incorrect)

Apriori pruning based on support is a greedy strategy but is not optimal.

- True
- False

#### Question 6 (2 / 2 pts)

Review the table below. Select the correct purity set of the confusion matrix.

- Cluster #1: 0.98, Cluster #2: 0.53, Cluster #3: 0.49, Total: 0.
- Cluster #1: 0.53, Cluster #2: 0.98, Cluster #3: 0.61, Total: 0.
- Cluster #1: 0.98, Cluster #2: 0.49, Cluster #3: 0.53, Total: 0.
- Cluster #1: 0.53, Cluster #2: 0.49, Cluster #3: 0.61, Total: 0.
- None of the above

#### Question 7 (0 / 2 pts, incorrect)

Review the following table.

Compute the average silhouette score to evaluate the k-means clustering result with the parameters below.
K:
Initial centroids: centroid1 = (4,3) and centroid2 = (9,7)
Distance measure: Euclidean distance
0.
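The data table for this question did not survive the transcript, so the exact score cannot be recomputed here. As a sketch of the computation: the silhouette of a point is s = (b - a) / max(a, b), where a is its mean distance to the rest of its own cluster and b its mean distance to the other cluster (this sketch assumes K = 2, so "the nearest other cluster" is just the other cluster). The points and labels below are hypothetical.

```python
from math import dist

# Hypothetical 2-D points and cluster labels; the quiz's actual data
# table is not reproduced here, so these values are illustrative only.
points = [(1, 1), (2, 2), (3, 3), (8, 8), (9, 7), (10, 9)]
labels = [0, 0, 0, 1, 1, 1]

def avg_silhouette(points, labels):
    """Average of s(i) = (b - a) / max(a, b) over all points (K = 2)."""
    scores = []
    for i, p in enumerate(points):
        own = [q for j, q in enumerate(points) if labels[j] == labels[i] and j != i]
        other = [q for j, q in enumerate(points) if labels[j] != labels[i]]
        a = sum(dist(p, q) for q in own) / len(own)      # cohesion
        b = sum(dist(p, q) for q in other) / len(other)  # separation
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

print(round(avg_silhouette(points, labels), 2))
```

Well-separated clusters like these score close to 1; overlapping clusters drive the score toward 0 or below.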

#### Question 8 (0 / 2 pts, incorrect)

Review the following table.
Compute the updated centroids after the first iteration using k-means clustering with the parameters below.
K:
Initial centroids: centroid1 = (4,3) and centroid2 = (9,7)
Distance measure: Euclidean
Type your answer in the same format as "centroid1 = (x1,y1) and centroid2 = (x2,y2)"
Each coordinate should be in the format A.BC, where A, B, and C are integers. If you have 4 as one of the coordinates, you should write 4.00. Please be mindful of the spaces and other formats.

centroid1 = (2.00,1.75) and centroid2 = (6.33,7.00)
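The data table for this question is missing from the transcript. As a hedged reconstruction, the seven hypothetical points below were chosen so that a single assign-then-update k-means iteration from the given initial centroids reproduces the stated answer:

```python
from math import dist

# Hypothetical points consistent with the stated answer; the quiz's
# actual data table is not reproduced here.
points = [(1, 1), (2, 2), (2, 1), (3, 3), (5, 7), (6, 6), (8, 8)]
centroids = [(4, 3), (9, 7)]

# Assignment step: each point joins its nearest centroid (Euclidean).
clusters = [[], []]
for p in points:
    nearest = min(range(2), key=lambda i: dist(p, centroids[i]))
    clusters[nearest].append(p)

# Update step: each centroid moves to the mean of its assigned points.
new_centroids = [
    (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
    for c in clusters
]
print([(round(x, 2), round(y, 2)) for x, y in new_centroids])
# [(2.0, 1.75), (6.33, 7.0)]
```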

#### Question 9 (2 / 2 pts)

Review the image below.

If you want to find the patterns represented by the nose, eyes, and mouth using k-means clustering, select all figures that would be well-clustered.

- (a)
- (b)
- (c)
- (d)
- None of the above

#### Question 10 (2 / 2 pts)

What is the purpose of cluster analysis?

- To avoid finding patterns in noise
- To compare clustering algorithms
- To compare two sets of clusters
- All of the above
- None of the above

We do cluster validity to avoid clustering noise and to find a suitable algorithm for our data.

#### Question 11 (2 / 2 pts)

Review the table below.

Select the correct entropy set of the confusion matrix.

- Cluster #1: 1.84, Cluster #2: 0.2, Cluster #3: 1.7, Total: 1.
- Cluster #1: 1.44, Cluster #2: 1.84, Cluster #3: 1.7, Total: 0.
- Cluster #1: 0.2, Cluster #2: 1.7, Cluster #3: 1.84, Total: 1.
- Cluster #1: 0.2, Cluster #2: 1.44, Cluster #3: 1.7, Total: 1.
- None of the above

Answer given: Cluster #1: 0.2, Cluster #2: 1.84, Cluster #3: 1.7, Total: 1.

#### Question 12 (2 / 2 pts)

Select qualities of clusters produced by a good clustering algorithm.

- Intra-cluster distances are minimized
- Inter-cluster distances are maximized
- Number of clusters produced
- A and B are correct

Good clusters have strong cohesion within the cluster and maximum distance between them.
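The confusion matrix behind Questions 6 and 11 was an image and is not reproduced in this transcript, so the listed values cannot be recomputed. As a sketch with a hypothetical matrix: per-cluster purity is the largest class count divided by the cluster size, and per-cluster entropy is -sum p_j log2(p_j) over the class proportions; the totals are size-weighted averages.

```python
from math import log2

# Hypothetical cluster-vs-class confusion matrix (rows = clusters,
# columns = true classes); the quiz's actual table is not shown.
matrix = [
    [45, 2, 3],   # cluster 1
    [5, 40, 15],  # cluster 2
    [10, 8, 22],  # cluster 3
]

def purity(row):
    """Fraction of the cluster belonging to its majority class."""
    return max(row) / sum(row)

def entropy(row):
    """-sum p_j * log2(p_j) over the class proportions in the cluster."""
    n = sum(row)
    return -sum((c / n) * log2(c / n) for c in row if c > 0)

n_total = sum(sum(row) for row in matrix)
# Totals: size-weighted averages over the clusters.
total_purity = sum(max(row) for row in matrix) / n_total
total_entropy = sum(sum(row) / n_total * entropy(row) for row in matrix)

print([round(purity(r), 2) for r in matrix], round(total_purity, 2))
print([round(entropy(r), 2) for r in matrix], round(total_entropy, 2))
```

A pure cluster has purity near 1 and entropy near 0, so the two metrics move in opposite directions.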

#### Question 13 (2 / 2 pts)
Review the table below.
Select all the support vector instances using a non-linear SVM.

- i
- ii
- iii
- iv
- v
- vi

#### Question 14 (0 / 2 pts, incorrect)

Review the table below.

Select a correct hyperplane equation for the data below using a non-linear SVM.

- Y=
- Y=1.
- Y=
- Y=2.
- None of the above

#### Question 15 (2 / 2 pts)

Select the true statement for Nearest Neighbor classification.

- Although Nearest Neighbor classification has the data dimensionality issue, it can be solved by the scaling technique
- A k-NN classifier is a typical lazy learner because it spends so much time building the model even if k is very small
- Drawing a circle is a mandatory task to find neighbor classes regardless of the value of K
- Determining the optimal value of K is important for k-NN classifier performance.
- None of the above
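On the last point, that the value of K matters: the sketch below, using hypothetical training points, shows a query whose predicted class flips as K grows, because the majority vote draws in neighbors from the other class.

```python
from math import dist
from collections import Counter

# Hypothetical labeled 2-D training points; illustrative only.
train = [((1, 1), "A"), ((2, 1), "A"), ((2, 2), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((6, 6), "B"), ((4, 3), "B")]

def knn_predict(query, k):
    """Majority vote among the k nearest training points (Euclidean)."""
    neighbors = sorted(train, key=lambda item: dist(query, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# The single nearest neighbor of (3, 3) is (4, 3), class B, but with
# k = 3 two class-A points join the vote and the prediction flips.
print(knn_predict((3, 3), k=1))  # B
print(knn_predict((3, 3), k=3))  # A
```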

