Hello, I am looking for someone to write an essay on Data Mining. It needs to be at least 1000 words.
Back-Propagated Delta Rule Networks (BP) are an example of a multilayer perceptron, which contains additional hidden layers. Such a network can function effectively on problems that a single-layer perceptron cannot solve.
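As a minimal sketch of the back-propagated delta rule, the pure-Python network below (a 2-2-1 multilayer perceptron with assumed sigmoid activations, random weights, and learning rate 0.5) is trained on XOR, a problem a single-layer perceptron cannot solve; the specific architecture and hyperparameters here are illustrative assumptions, not taken from the essay:

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: not linearly separable, so a hidden layer is required
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2-2-1 network: two hidden neurons and one output, each with a bias weight
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

lr = 0.5
loss_before = mse()
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # delta rule: error gradient propagated back through the sigmoids
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_o[j] -= lr * d_o * h[j]
            w_h[j][0] -= lr * d_h[j] * x[0]
            w_h[j][1] -= lr * d_h[j] * x[1]
            w_h[j][2] -= lr * d_h[j]
        w_o[2] -= lr * d_o
loss_after = mse()
```

The hidden-layer deltas are computed from the output delta before the output weights are updated, which is the back-propagation step that distinguishes this from a single-layer delta rule.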
When training neural networks to make accurate predictions, increasing model capacity relative to the available training cases eventually leads to overfitting (George N. Karystinos, 2000). This occurs when the number of input variables is large compared to the number of training cases, or when the input variables are highly correlated with each other. Underfitting and overfitting are also routinely encountered in related methods such as kernel regression and smoothing splines. Overfitting is most pronounced in more complex networks, and it leads to wild, unreliable predictions on new data.
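The "wild predictions" of an over-complex model can be sketched with high-degree polynomial interpolation, a stand-in for an over-parameterised network (the linear ground-truth function and noise level below are illustrative assumptions): the interpolant fits every noisy training case exactly, yet swings away from the true function between training points.

```python
import random

random.seed(1)

def true_fn(x):
    return x  # assumed simple linear ground truth

# few noisy training cases
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ys = [true_fn(x) + random.uniform(-0.5, 0.5) for x in xs]

def lagrange(x, xs, ys):
    """Degree-7 polynomial passing exactly through all training points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# zero error on the training cases (the model has "memorised" the noise)...
train_error = max(abs(lagrange(x, xs, ys) - y) for x, y in zip(xs, ys))
# ...but larger error against the true function between training points
test_error = max(abs(lagrange(x + 0.5, xs, ys) - true_fn(x + 0.5))
                 for x in xs[:-1])
```

The gap between the two errors is the overfitting described in the text: perfect recall of training cases with degraded prediction elsewhere.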
Data cleansing is the process of removing inaccurate and inappropriate data records, and it is an integral part of data processing and maintenance. In large data sets, finding errors and correcting them requires interaction with domain experts, which is expensive and time consuming. The task is complex because it involves comprehensively identifying and rectifying errors. Initially these operations were carried out manually; computational means of data cleansing evolved later, and even these processes remain time consuming and error prone (Heiko Müller et al.).
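A minimal sketch of computational data cleansing, using hypothetical `(name, age, email)` records invented for illustration: validity rules catch inaccurate values, and a seen-set removes exact duplicates.

```python
# hypothetical raw records: (name, age, email)
raw = [
    ("alice", 34, "alice@example.com"),
    ("bob", -5, "bob@example.com"),      # inaccurate: negative age
    ("alice", 34, "alice@example.com"),  # exact duplicate
    ("carol", 29, ""),                   # inappropriate: missing email
    ("dave", 41, "dave@example.com"),
]

def is_valid(record):
    """Assumed domain rules a human expert might supply."""
    name, age, email = record
    return bool(name) and 0 <= age <= 120 and "@" in email

seen = set()
clean = []
for rec in raw:
    if is_valid(rec) and rec not in seen:
        seen.add(rec)
        clean.append(rec)
```

In practice the `is_valid` rules are exactly the part that needs the expensive domain-expert interaction the text describes; the deduplication step is the mechanical part.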
3. What is the significance of Bayes' Theorem in Data Mining? Give an example of how statistical inference can be used for Data Mining.
Most of the statistical models currently used in data mining are prone to overfitting and are unstable (sensitive to minor changes in the data). These difficulties can be overcome with Bayesian methods of statistical mining, and the reliability of such algorithms has been reviewed (J. Kolter and M. Maloof, 2003). Bayesian algorithms facilitate the integration of clustering and yield scalable, powerful algorithms well suited to data mining. Capturing correlations among large numbers of variables is also possible with the Bayesian method.
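Bayes' theorem itself, P(H|E) = P(E|H)·P(H) / P(E), can be shown with a toy spam-filter calculation; the probabilities below are assumed numbers chosen purely for illustration:

```python
# Hypothetical numbers: prior spam rate and word frequencies
p_spam = 0.2                 # prior: 20% of mail is spam
p_word_given_spam = 0.6      # the word appears in 60% of spam
p_word_given_ham = 0.05      # ...and in 5% of legitimate mail

# total probability of seeing the word (law of total probability)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# posterior: probability the mail is spam given it contains the word
p_spam_given_word = p_word_given_spam * p_spam / p_word
```

Here the evidence raises the spam probability from the 0.2 prior to a 0.75 posterior; a naive Bayes classifier repeats this update once per word.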
Example:
When searching for similar sequences (gene or protein sequences) in a sequence database, the data mining algorithm looks for similar matches based on a statistical measure, the expected value (e-value). The lower the e-value, the stronger the relationship between the query and the retrieved result. Since the data are simply strings of characters, only statistical measures can provide a comparative account of the data sets.
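The filter-and-rank step this describes can be sketched as follows; the sequence identifiers, e-values, and threshold are hypothetical, not drawn from any real database search:

```python
# hypothetical search hits: (sequence id, e-value); lower = stronger match
hits = [("seqA", 1e-30), ("seqB", 0.5), ("seqC", 1e-8), ("seqD", 3.2)]

# keep only statistically significant matches, best (lowest e-value) first
threshold = 1e-5
significant = sorted((h for h in hits if h[1] <= threshold),
                     key=lambda h: h[1])
```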
4. Explain the concept of a Maximum Likelihood Estimator with an example.
The maximum likelihood estimator (MLE) is the parameter value that maximizes the likelihood, i.e. the probability of the observed data under the assumed model. It is applied in practice in the prediction of phylogenetic relationships of protein sequences by tree-building algorithms, where the maximum likelihood estimator forms the basis of the evolutionary prediction methods. The likelihood function gives the relative plausibility of each candidate tree given the data sets (protein sequences).
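A simpler worked example than a phylogenetic tree, using an assumed coin-toss model: with 7 heads observed in 10 tosses, the Bernoulli parameter that maximizes the likelihood of that data is 7/10, which a grid search recovers.

```python
# observed data: 7 heads in 10 tosses (assumed for illustration)
heads, n = 7, 10

def likelihood(p):
    """Probability of the observed outcome counts under Bernoulli parameter p."""
    return p ** heads * (1 - p) ** (n - heads)

# evaluate a grid of candidate parameters; the MLE is the candidate
# that maximizes the likelihood of the observed data
candidates = [i / 100 for i in range(1, 100)]
mle = max(candidates, key=likelihood)
```

Tree-building programs do the same thing at a much larger scale: each candidate tree plays the role of a candidate `p`, and the tree with the highest likelihood of producing the observed sequences is chosen.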