Peer-Review Record

Multiobject Optimization of National Football League Drafts: Comparison of Teams and Experts

Appl. Sci. 2022, 12(13), 6303; https://doi.org/10.3390/app12136303
by Attila Gere 1,*, Dorina Szakál 1 and Károly Héberger 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 22 April 2022 / Revised: 8 June 2022 / Accepted: 15 June 2022 / Published: 21 June 2022
(This article belongs to the Special Issue New Trends in Sport and Exercise Medicine II)

Round 1

Reviewer 1 Report

 

The sum of ranking differences may not be a new idea.

1. The calculation process should be presented in detail. I cannot see any innovation in your research method.

2. Consensus degree is an important topic in MCDM. You may introduce more references and press forward on a deeper consensus model design.

3. The Discussion section should be rewritten. You may compare other methods with your method.

Author Response

Reviewer #1

The sum of ranking differences may not be a new idea.

Response: Sum of ranking differences (SRD) is in fact a relatively new idea; it was introduced in 2010-2013 [1-3], and its applicability and usability are far from being exhausted. The present paper is a good example of a novel application. If only brand-new algorithms were accepted, new methods could never spread, because papers applying them would be rejected.

 

1. The calculation process should be presented in detail. I cannot see any innovation in your research method.

Response: SRD has never been applied to analyze NFL drafts; with it, the experts can be evaluated, grouped, and compared easily, and even prediction is possible. We have shown that the coding option can be tailored to the experts’ feelings/decisions. A usual multicriteria decision making (MCDM) tool works in one direction only, i.e., the objectives can be compared based on ranking by criteria (objects, samples). Here, we utilized both directions: the team ranking defines the ordering of experts, and the experts’ ranking defines the team ranking (the transpose problem). Finally, we could make statistically sound comparisons and decisions using analysis of variance (ANOVA).
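The core of SRD is simple enough to sketch: rank the objects (here, teams) by each expert's grades and by a reference ranking (typically the row-wise average), then sum the absolute rank differences per expert. The snippet below is a minimal illustrative sketch, not the authors' validated implementation — a full SRD treatment also handles tied ranks and validates the values with a randomization test — and all function and variable names are ours.

```python
def ranks(values):
    """1-based rank positions (1 = smallest value).
    Assumes distinct values; a full SRD implementation would use average
    ranks for ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def srd(grades, reference):
    """Sum of ranking differences for each column (expert) of `grades`.
    Rows are the ranked objects (teams); `reference` is the benchmark
    (e.g., the row-wise average grade). Lower SRD = closer to the reference."""
    ref = ranks(reference)
    n_experts = len(grades[0])
    return [
        sum(abs(a - b) for a, b in zip(ranks([row[j] for row in grades]), ref))
        for j in range(n_experts)
    ]
```

Applying `srd` to the transposed grade matrix (teams as columns, experts as rows) gives the other direction mentioned above: the teams are then ranked against the experts' consensus.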

 

2. Consensus degree is an important topic in MCDM. You may introduce more references and press forward on a deeper consensus model design.

Response: Yes, indeed. Consensus modeling is a state-of-the-art approach to minimizing modeling errors. We admit our description was not exhaustive in this respect, though we did mention Lourenço and Lebensztajn’s work [4]. That source unambiguously showed that SRD corresponds to the consensus of eight MCDM techniques: desirability, quality loss function, weighted Tchebycheff, Multi-Objective Optimization on the basis of Ratio Analysis (MOORA), Kim and Lin’s criterion, Simple Multi-Attribute Rating Technique using Swings (SMARTS), Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE II), and the fuzzy decision method. There is no reason to doubt these authors’ findings.

 

3a. The Discussion section should be rewritten.

Response: The Discussion section was extended with several references [5-8], as follows:

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was applied as an MCDM tool, with normalization and weighting. The weights of the criteria were calculated by an eight-step process using the fuzzy analytic hierarchy process (AHP). TOPSIS, contrary to its name, does not rank the objectives by their similarity to an ideal solution, as SRD does, but searches for the shortest geometric distance from the positive ideal solution (PIS) and the longest geometric distance from the negative ideal solution (NIS) [5]. The same triangular fuzzy weighting was used in an AHP to determine transport planners’ criteria when establishing a set of park-and-ride facilities [6].
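For readers unfamiliar with the PIS/NIS logic, the textbook version of TOPSIS can be sketched as follows. This is an illustrative sketch with plain vector normalization and fixed weights, not the fuzzy-AHP-weighted variant of [5]; all names are ours.

```python
import math

def topsis(matrix, weights, benefit):
    """Closeness-to-ideal scores for each alternative (row of `matrix`).
    `benefit[j]` is True if criterion j is to be maximized.
    Assumes each criterion column has a nonzero norm."""
    n_cols = len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_cols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_cols)] for row in matrix]
    # Positive/negative ideal solutions, criterion by criterion.
    pis = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nis = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        d_neg = math.sqrt(sum((x - n) ** 2 for x, n in zip(row, nis)))
        # Score in [0, 1]: 1 = coincides with PIS, 0 = coincides with NIS.
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Note the contrast with SRD made above: the score is built from two geometric distances (to PIS and NIS), not from a rank comparison against a reference.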

Three aspects were considered in developing the assessment: pertinence, importance, and unambiguity [7]. Eleven play actions, decision-making, technical execution, and the final result were coded according to the observed adequacy. A panel of experts validated the instrument (camera).

A computer model was developed to calculate the probability that a pass reaches the target player in soccer without being intercepted by opponents. A comparison with expert ratings was carried out as part of the model, without using any MCDM methods [8].

 

3b. You may compare other methods with your method.

Response: Indeed, such comparisons are routinely made and lack novelty. For example, Lourenço and Lebensztajn compared SRD with other MCDM methods exhaustively [4]. We have studied couplings of SRD with factor analysis and sparse principal component analysis (sPCA), as well as SRD-based improvements of the PROMETHEE-GAIA multicriteria decision support technique [9]. We would like to emphasize that most MCDM tools use weights set according to the preferences of the decision maker; that is, any desired result can be produced simply with “properly” adjusted weights. SRD is free from such subjectivism.

Author Response File: Author Response.pdf

Reviewer 2 Report

Please see the attachment

Comments for author File: Comments.docx

Author Response

Reviewer#2

 

Manuscript Title: Multiobject Optimization of National Football League Drafts: Comparison of Teams and Experts

 

Highlighted Findings: A coded solution to a novel Multi-Criteria Decision-Making method called Sum of Ranking Differences (SRD) is presented. The paper is well written, and the concept is explained in a comprehensive manner. Also, the method is compared quite extensively with methods already presented in the literature.

Response: We appreciate your positive evaluation. Thank you!

 

Also, as a future direction of the study, it would be great to follow the expert opinions on the draft throughout the season, in order to get a proper grasp of the method’s validity with respect to changing variables in a time-series setting.

Response: A time-series evaluation would indeed be valuable. To obtain statistically meaningful results, a series of at least seven to ten years is necessary. We plan to complete such an analysis, but its completion is beyond the scope of the present work.

 

Some of the minor things that I found that need to be addressed are:

  1. There are some typos and grammatical errors in the Results and Discussion section of the paper that need to be corrected (e.g., “It is interesting to see the comparison of experts (Fig 3)”, Line 246).

Response: Corrected.

 

  2. It would be great if you would highlight the limitations of the presented method.

Response: Corrected.

 

  3. Can fuzzy inputs be used instead of discrete inputs? How would that affect the results?

Response: Fuzzy input can similarly be ranked as non-fuzzy inputs. The above-mentioned problem remains: as the membership function changes, the results can be adjusted according to the subjectivism of the decision makers. SRD is free from such influence.
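To make the point concrete: one common way to rank fuzzy grades is to defuzzify each fuzzy number first, e.g., a triangular fuzzy number (a, b, c) by its centroid, after which SRD proceeds exactly as with crisp inputs. The sketch below illustrates this under that assumption; the centroid is only one of several defuzzification choices (shifting it is exactly the kind of adjustable membership-function decision mentioned in the response), and all names are ours.

```python
def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def rank_fuzzy(grades):
    """Indices of the fuzzy grades ordered by their centroids (smallest first).
    Once defuzzified, the crisp values can feed any rank-based method such
    as SRD."""
    return sorted(range(len(grades)), key=lambda i: centroid(grades[i]))
```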

 

  4. It would be great if you’d add some more information about consensus design.

Response: We are afraid we do not fully understand this point. A search for “consensus design” in Scopus (accessed 18 May 2022) returns the message “No documents were found”. Consensus modeling is everyday practice in QSAR/QSPR and related fields. Perhaps the reviewer is thinking of mixture models? These are suitable for finding the optimal, domain-specific QSAR model. This approach has only just started to spread; we have found one nice paper [10]. Nevertheless, we refrain from citing it, as it has nothing to do with either sport or MCDM.

 

5a. Was the ranking done prior to or after the drafting process?

Response: We are working with existing data. SRD involves a ranking process, and a ranking of experts (and of teams as well) is created as a result. The data set lists the draft evaluations of the 32 NFL teams given by 18 experts; therefore, it was created after the draft process.

 

5b. Due to the subjectivity of the opinions and the probability of a draft going in the favor of the expert at any point in the season, how would this ranking justify that the ranking of the experts done by the presented method is good or bad?

Response: That is a very good point! In fact, not only are the opinions biased and subject to random errors, but so is the future performance of individuals and teams. Fortunately, biases and random fluctuations can be handled properly by statistical approaches. Justification and validation are difficult tasks, especially if no good reference (gold standard) exists. We are working with random errors: it is impossible to predict the outcome of a roll of a die; only the probabilities can be predicted. However, accredited statistical tools provide the most probable solution according to the maximum likelihood principle.

Author Response File: Author Response.pdf
