
ITS-836 Course Paper, a total of 25 points (25% of the total course points)

Izzat Alsmadi

Guidelines/Rubrics to deliver the Course Paper

These instructions use details from https://www.kaggle.com/WinningModelDocumentationGuidelines

· The dataset in this project must be the one you selected for the course through the Blackboard discussion board. Any other dataset will cause your whole project not to be graded.

· Submit the final paper no later than Oct. 10th.

· Make sure your paper includes the seven major graded sections listed below.

· The deliverable should contain the following components:

(1) Overall Goals/Research Hypothesis (10 %)

State 1-3 research questions to direct your entire project.

· You may delay writing this section until (1) you have studied all previous work and (2) you have done some analysis and understand the dataset/project.

(2) Previous/Related Contributions (15 %)

As most of the selected projects use public datasets, there are no doubt existing attempts/projects to analyze those datasets. 30 % of this deliverable is your overall assessment of previous data analysis efforts. This assessment should include:

· Evaluating existing source code (e.g., in the Kernels and discussion sections) or any other reference. Make sure you run that code and show its results.

· In addition to the code, summarize the most relevant literature or efforts to analyze the same dataset you picked.

· For the few who picked their own datasets, you are still expected to do a literature survey in this section on what is most relevant to your data/idea/area and to summarize those most relevant contributions.

(3) A comparison study (15 %)

Compare the results of your own work/project with results from previous or other contributions (a data and analysis comparison, not a literature review).

The difference between section 3 and section 2 is that section 2 focuses on code/data analysis found in sources such as Kaggle, GitHub, etc., while section 3 focuses on research papers that did not necessarily study the same dataset but address the same focus area.

(4) Preprocessing activities, Feature Selection / Engineering (10 %)

(See the following link for guidance on the content of this and the following sections:)

https://www.kaggle.com/WinningModelDocumentationGuidelines

· What were the most important features?

· We suggest you provide:

· a variable importance plot showing the 10-20 most important features (see the sketch after this list), and

· partial plots for the 3-5 most important features

· If this is not possible, you should provide a list of the most important features.

· How did you select features?

· Did you make any important feature transformations?

· Did you find any interesting interactions between features?

· Did you use external data? (if permitted)
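
As an illustration of the feature-importance and partial-plot items above, here is a minimal sketch in Python using scikit-learn. The built-in dataset, the RandomForestClassifier, and the number of features shown are all placeholders; your own project should substitute its selected dataset and models.

import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay
from sklearn.model_selection import train_test_split

# Placeholder dataset; replace with the course dataset you selected.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Variable importance plot: the 15 most important features by impurity-based importance.
importances = pd.Series(model.feature_importances_, index=X.columns)
top = importances.sort_values(ascending=False).head(15)
top.plot.barh(title="Top 15 feature importances")
plt.tight_layout()
plt.show()

# Partial plots for the 3 most important features.
PartialDependenceDisplay.from_estimator(model, X_test, features=list(top.index[:3]))
plt.tight_layout()
plt.show()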

(5) Training Method(s) (10 %)

· What training methods did you use?

· Did you ensemble the models?

· If you did ensemble, how did you weight the different models? (One common weighting approach is sketched after this list.)
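
One common way to weight an ensemble, sketched below under the assumption that scikit-learn is used, is to average the models' predicted probabilities with weights proportional to each model's validation score. The dataset and the two models here are placeholders.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data; replace with your project's training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
for m in models:
    m.fit(X_train, y_train)

# Weight each model by its validation accuracy, normalized to sum to 1.
scores = np.array([accuracy_score(y_val, m.predict(X_val)) for m in models])
weights = scores / scores.sum()

# Blend predicted probabilities using those weights.
blended = sum(w * m.predict_proba(X_val) for w, m in zip(weights, models))
print("Blended validation accuracy:", accuracy_score(y_val, blended.argmax(axis=1)))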

A6. Interesting findings

· What was the most important trick you used?

· What do you think set you apart from others in the competition?

· Did you find any interesting relationships in the data that don't fit in the sections above?

Many customers are happy to trade off model performance for simplicity. With this in mind:

· Is there a subset of features that would get 90-95% of your final performance? Which features? *

· Which model was most important? *

· What would the simplified model score?

· * Try to restrict your simple model to fewer than 10 features and one training method (a brief sketch follows this list).
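
A minimal sketch of one way to build such a simplified model, assuming scikit-learn: keep fewer than 10 features (chosen here by a univariate ranking) and a single training method, then compare its score with the full model. The dataset and the choice of logistic regression are placeholders.

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder dataset; replace with the course dataset you selected.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Full model vs. a simplified model restricted to 8 features and one method.
full = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
simple = make_pipeline(SelectKBest(f_classif, k=8), StandardScaler(),
                       LogisticRegression(max_iter=1000))

full.fit(X_train, y_train)
simple.fit(X_train, y_train)
print("Full model accuracy:      ", full.score(X_test, y_test))
print("Simplified model accuracy:", simple.score(X_test, y_test))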

(6) Accuracy metrics reporting, charts, and Model Execution Time (10 %)

Many customers care about how long the winning models take to train and generate predictions (a minimal timing sketch follows the list below):

· How long does it take to train your model?

· How long does it take to generate predictions using your model?

· How long does it take to train the simplified model (referenced in section A6)?

· How long does it take to generate predictions from the simplified model?
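
A minimal sketch of how these timings could be measured and reported, assuming Python and scikit-learn; the model and dataset are stand-ins for whichever models the paper actually uses (repeat the same measurements for the simplified model).

import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder data and model; substitute your own.
X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
model = GradientBoostingClassifier(random_state=0)

# Time model training.
start = time.perf_counter()
model.fit(X, y)
train_seconds = time.perf_counter() - start

# Time prediction generation.
start = time.perf_counter()
model.predict(X)
predict_seconds = time.perf_counter() - start

print(f"Training time:   {train_seconds:.2f} s")
print(f"Prediction time: {predict_seconds:.2f} s")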

(7) Use of ensemble methods (15 %)

Per the last chapter we covered, make sure you employ at least two different ensemble models in your code and show the model details and results.
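
A minimal sketch, assuming scikit-learn, of employing two different ensemble methods (bagging via a random forest and boosting via AdaBoost) on the same data and reporting their cross-validated results side by side; the dataset is a placeholder.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder dataset; replace with the course dataset you selected.
X, y = load_breast_cancer(return_X_y=True)

ensembles = {
    "Random forest (bagging)": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost (boosting)": AdaBoostClassifier(n_estimators=200, random_state=0),
}

# Report 5-fold cross-validated accuracy for each ensemble method.
for name, model in ensembles.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")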

References 

Citations to references, websites, blog posts, and external sources of information where appropriate.

Summary

Summarize the most important aspects of your model and analysis, such as:

The training method(s) you used (e.g., Convolutional Neural Network, XGBoost)

The most important features

The tool(s) you used

How long it takes to train your model

----------------------------------------------------------------

Quality Criteria (10-20% of overall project):

1. Thorough performance analysis: Results in data analysis can be misleading. Without a detailed analysis of different performance metrics (e.g., accuracy, recall, ROC, AUC), a one-sided view of the results can present incomplete and inaccurate findings. Presenting a thorough analysis of the overall performance of your models will show that you did not ignore any factor in your model.
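
A minimal sketch of reporting several metrics (accuracy, recall, ROC AUC, plus a full classification report) rather than a single score, assuming scikit-learn; the model and dataset are placeholders for whatever the project actually uses.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, classification_report,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder dataset and model; substitute your own.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

pred = model.predict(X_test)
prob = model.predict_proba(X_test)[:, 1]

# Report multiple performance metrics, not just one.
print("Accuracy:", accuracy_score(y_test, pred))
print("Recall:  ", recall_score(y_test, pred))
print("ROC AUC: ", roc_auc_score(y_test, prob))
print(classification_report(y_test, pred))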

2. Following standard project templates: You can find several standard templates for data science projects on the Internet (how to structure your code, data, etc.). While following a standard template is not required, it will be considered as part of the quality criteria. Here are examples of code templates for different programming environments:

a. R and RStudio: 

http://projecttemplate.net/getting_started.html

https://nicercode.github.io/blog/2013-04-05-projects/

https://community.rstudio.com/t/data-science-project-template-for-r/3230/10

b.  Python:

https://towardsdatascience.com/manage-your-data-science-project-structure-in-early-stage-95f91d4d0600

https://drivendata.github.io/cookiecutter-data-science/#example

https://github.com/equinor/data-science-template

c. MS Azure

https://github.com/Azure/Azure-TDSP-ProjectTemplate

https://buckwoody.wordpress.com/2017/08/17/a-data-science-microsoft-project-template-you-can-use-in-your-solutions/

https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/team-data-science-process-project-templates

3. Better documentation

Save the data and code that generated the output, rather than the output itself. Intermediate files are okay as long as there is clear documentation of how they were created.

4. Use Version Control

e.g., using services such as GitLab, GitHub, or Bitbucket

5. Document and keep track of your analysis environment: If you work on a complex project involving many tools and datasets, the software and computing environment can be critical for reproducing your analysis:

· Computer architecture: CPU (Intel, AMD, ARM), GPUs

· Operating system: Windows, Mac OS, Linux / Unix

· Software toolchain: compilers, interpreters, command shell, programming languages (C, Perl, Python, etc.), database backends, data analysis software

· Supporting software / infrastructure: libraries, R packages, dependencies

· External dependencies: web sites, data repositories, remote databases, software repositories
