SAFA Calculator v1.1

SAFA Calculator v1.1 is software that supports better decision making in academic publication. In academic publishing, a manuscript typically passes through a quality-control system in which a group of experts acts as evaluators. Currently the most popular method of evaluating academic content is "Instrument Based Assessment", or "IBA". IBA commonly uses a structured instrument containing 8-12 items followed by a list of recommendations. IBA has several shortcomings; in particular, human factor bias is one of the most problematic elements in evaluating academic content. The likelihood of more than one reviewer agreeing on an article is only slightly better than chance, so an end-decision often suffers from a lack of accuracy due to this human process (human factor bias). These biases can be classified into three types: evaluator's attribute bias, recommendation bias, and end-decision bias. Reviewer's attribute bias occurs when reviewers are selected without the required relevant expertise and experience. Assigning the right reviewer is often difficult; however, whether the referee is good or bad, taking their experience and expertise into account as much as possible is a remedy for this bias. Assigning a weight to these attributes may help remedy the problem and give the author justice. Recommendation bias occurs due to poor-quality review work and poor judgment. A review form usually has a few items covering several aspects of a manuscript; reviewers score these items on a scale and finally make a recommendation. Often there is a mathematical inconsistency between the total median score of the items and the recommendation (recommendations must be weighted to be compared for consistency).
End-decision bias often occurs due to the complexity and non-uniformity among the evaluators. More than one reviewer may make mutually exclusive recommendations, leaving the editor to make the end-decision (final decision). An editor always tries to make the best decision, but it is naturally not easy to reach a decision out of differing recommendations. This issue causes end-decision bias. The "Standardized Acceptance Factor Average (SAFA)" is an index that can be estimated by using SAFA Calculator v1.1.


SAFA Calculator v1.1 is a unique quantitative tool for making a decision such as 'acceptance' or 'rejection' on academic content. The "Standardized Acceptance Factor Average" is the mathematical framework that underlies the algorithm of the software, SAFA Calculator v1.1. Generally, a decision of 'acceptance' or 'rejection' on academic content (a manuscript) is made prior to print. SAFA Calculator v1.1 (ISBN 978-983-44190-0-4) processes the opinions or remarks provided by the evaluators and produces an index with a cut-off point for making an end-decision. The estimation of the SAFA depends solely on a structured evaluation form, which a publication authority can vary with the required adjustment. SAFA can also be used to rank academic articles. Online use of SAFA can save substantial time for evaluators and editors alike.


Making End-decision in academic publication

An end-decision is made by an editor on whether a manuscript is to be accepted or rejected for possible publication. Generally, an editor is the final authority on publication decisions, though there are variations: some publication authorities involve the editorial board in decision making, in which case the chief editor solicits the editorial board for the final decision. Whatever the decision-making process at the editor's level, the decision is based on the experts' (reviewers') opinions. These opinions are gathered through an evaluation process that can differ case by case. Thus, there is a close link between the 'goodness' of the evaluation process and an efficient end-decision. One way or another, the editor is closely tied to the decision-making process, so it is useful to know about editors and how they decide. Wordweb (2008) defines an editor as "A person responsible for the editorial aspects of publication; the person who determines the final content of a text". According to Encarta (2008), an editor is a publishing supervisor who oversees the overall content of a manuscript. Collins English Dictionary (1995) gives two core definitions of an editor: 'A person who edits written material for publication, a person in overall charge of the editing……' An editor's universal task is editing and facilitating the publication process, no matter which academic articles they edit. Decision making is one of the most important responsibilities borne by an editor. Usually, after receiving a manuscript from an author or a group of authors, the editor begins processing the submission and evaluates its suitability. If the manuscript is found suitable, the editor proceeds to evaluation and assigns a few evaluators. After receiving the evaluation reports along with comments, the editor processes them to make the final decision (end-decision), either alone or by soliciting the editorial board.
An editor synthesizes each evaluation report to the best of his or her ability to make a fair decision. Since this is a human process, the chance of a slightly biased decision cannot easily be ignored. This happens not because of the editor's intention but because of a shortfall in the traditional decision-making approach.


Ambiguity in Decision Making

In academic publication there is no standard decision-making process. Commonly the process is set by an editorial board led by an editor. As a result, decisions differ: the same manuscript can simultaneously receive an 'acceptance' recommendation at one journal and a 'rejection' at another. This is called the 'misery of recommendation'. An editor's or editorial board's decision is actually a synthesized outcome of the evaluation reports. Since the basis of a decision is an evaluation process, one must be aware of that process's shortfalls. Decision making based on evaluation is confusing and subject to human factor bias. Whatever comments individual evaluators give, the final decision depends on the editor's or editorial board's view, and in most cases the evaluators' recommendations are followed. A manuscript may have publishable content yet be rejected because of one evaluator's view; in other words, the traditional evaluation process, or IBA, is not one hundred percent accurate. Of all kinds of academic publication, journal publication warrants the most care and attention. Most potential submissions are processed by a journal to explore their suitability for possible publication, and an evaluation process is the most common technique facilitating that decision. Often an end-decision is made by an editor based on two or three double-blind evaluation reports; open evaluation is seldom used. The evaluation reports often differ from each other in their recommendations, so an editor's final decision based on these reports may carry human factor bias. At the evaluator's stage, a recommendation on a manuscript reflects an individual opinion, leaving little opportunity to argue with the comments except on the evaluator's experience and expertise.
The experience and expertise of an evaluator are hard to measure. Because of all these issues, an editor following the instrument-based approach faces considerable difficulty in making a decision.

Quantitative Approach for Making End-decision

The traditional approach to decision making depends solely on instruments or on evaluators' notes on certain aspects of a manuscript; to date, this is the best method used by publication authorities. The new approach presented here combines the traditional Instrument Based Assessment (IBA) approach with mathematical tools. It is called the "Standardized Acceptance Factor Average (SAFA™)" and provides many conveniences in making an end-decision.

Background of SAFA™

Though the evaluation process has improved substantially over the last few decades, it is still debatable whether it is the best way to judge an article for possible publication. Undoubtedly, evaluating a manuscript before publication has become established as the most popular approach, with several adjustments and improvements along the way. Even so, authors and the relevant community repeatedly raise a crucial question: how efficient is the decision made by an evaluator, and what is the basis for the end-decision made by the editor? A manuscript is a written account of scientific work with several aspects, such as logic and consistency. Does an evaluator always have enough knowledge to cover all of these? Even when a reviewer is well qualified, how his or her opinion on the different aspects of a manuscript should be measured and summarized remains unanswered. However it is done, the human factor becomes one of the most concerning issues in the evaluation process. Incorporating an evaluator's (reviewer's) efficiency into the decision may produce a better decision than relying completely on a 'Yes' or 'No'. The vital part of an evaluation process is the evaluator's comments on the scholarly value of a manuscript, and an editor makes the end-decision based on more than one evaluation, often two. Although the end-decision is based on the evaluators' opinions, the decision-making process is not error free in practice. In the first place, traditional decision making in publication suffers from heterogeneous opinion among the evaluators, which is very natural; this heterogeneity is caused purely by the human factor. Thus, resolving this issue when an editor makes an end-decision is as difficult as keeping the air dust-free.
Given how difficult this issue is to resolve, the level of bias may be minimized but not totally eliminated. This can be termed 'reviewer's attribute bias'. The traditional review process uses a standard review format, named in the earlier discussion the 'Instrument Based Assessment' approach, or 'IBA'; it remains the most popular decision-making approach in publication. The IBA tools, that is, the instruments used in IBA, differ from one journal to another, which raises a further issue of variability in quality. In IBA a list of recommendation options always follows a number of items; the number of items ranges from 6 to 12. The purpose of having a list of recommendations after the principal instrument is to converge the scoring of each item into a common recommendation such as 'accept' or 'reject' (often there are more options). Making a recommendation out of a few options based on the principal instrument introduces another bias: there are always inconsistencies between an evaluator's recommendation and his or her item scoring, and this bias is mostly irremovable in the traditional end-decision-making approach. This problem is named 'inconsistent recommendation bias'. An end-decision is substantially directed by the recommendations in the evaluation reports, and this again causes bias: for instance, if three review reports make non-identical recommendations, the editor must make a decision that may not be completely accurate. I term this issue 'end-decision bias'. In summary, the IBA is associated with three major biases: 'reviewer's attribute bias', 'inconsistent recommendation bias', and 'end-decision bias'. These three biases reduce the efficiency of decision making in academic publication. The SAFA™ system has been proposed to minimize the total bias caused by these three issues.
Standardized Acceptance Factor Average (SAFA™)

The Standardized Acceptance Factor Average (SAFA™) is a mathematical framework that facilitates decision making towards acceptance or rejection of a submission for possible publication. Generally, such a decision is made based on evaluators' opinions, but since all evaluators do not stand at a similar level, the decision may need adjustment. To estimate the SAFA, a standard double-blind peer review process is used, incorporating the evaluator's experience and expertise. The SAFA™ can be an option to eliminate the 'misery of recommendation'. The estimation of the SAFA depends solely on a structured evaluation form, which a publication authority can vary with the required adjustment. A review of 20 review reports submitted to the International Journal of Management and Entrepreneurship (ISSN 1823-3538) and the International Journal of Business and Management Research (ISSN 1985-3599) showed that the review score (estimated by an averaging technique: summing all the score points and dividing by the sum of the highest possible score of each item) is inconsistent with the recommendation ('accept' or 'reject'). For instance, one evaluator recommended 'Accept with minor correction and re-review' with a review score of 0.73, while another evaluator made the same recommendation with a review score of 0.43. This inconsistency makes the process less efficient; it may be corrected by adjusting the review score according to the reviewer's efficiency (experience and expertise) and applying an averaging technique to minimize the bias.
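The review score described above can be sketched in a few lines. This is an illustrative rendering of the stated averaging rule (sum of awarded points divided by the sum of each item's highest possible score); the item count and scale here are hypothetical, not taken from the SAFA evaluation form.

```python
def review_score(scores, max_scores):
    """Normalized review score in [0, 1]: points awarded divided by
    the highest possible total across all items."""
    if len(scores) != len(max_scores):
        raise ValueError("each item needs a maximum possible score")
    return sum(scores) / sum(max_scores)

# Hypothetical report: eight items, each scored out of 5.
scores = [4, 3, 5, 4, 3, 4, 3, 4]        # 30 points awarded
max_scores = [5] * 8                     # 40 points possible
print(review_score(scores, max_scores))  # 0.75
```

A score of 0.73 and a score of 0.43 leading to the same recommendation, as observed in the 20 reports, is exactly the inconsistency this normalization makes visible.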


The most difficult part of the SAFA is measuring a reviewer's efficiency. For simplicity, editors follow a principle called 'Two Factor Review Efficiency Measurement', or 'TFREM'. The TFREM covers the two major factors to be considered in an evaluation process, namely (i) review experience, measured by the number of manuscripts reviewed, and (ii) the approximate peer-ness of a reviewer to the subject matter under review. The TFREM principle can be adjusted where more than two factors are required to measure a reviewer's efficiency; since the mathematical estimation uses an averaging technique, it allows many factors to be considered. After the correction factor is formulated, the review score can be estimated and weighted by the decision factor (weighted over 100, following the rule 'acceptance is assigned a higher weight'). Each evaluation report produces a SAFA score; thus, when there is more than one evaluation of the same manuscript, an average SAFA should be obtained and used in making the end-decision. The entire calculation can be done with a simple, user-friendly Microsoft Office Excel-based application called SAFA™ Calculator v1.1.
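The steps above can be sketched as follows. The exact SAFA equations are not published in this overview, so this is only an assumed reading of the description: the two TFREM factors are averaged into a correction factor, the review score is multiplied by that factor and by the recommendation weight given over 100, and the per-report SAFA scores are averaged. All factor values and weights below are hypothetical.

```python
def correction_factor(experience, peerness):
    """TFREM correction (assumed form): the average of the two factors,
    each expressed on a [0, 1] scale."""
    return (experience + peerness) / 2

def safa_score(score, experience, peerness, decision_weight):
    """One report's SAFA (assumed form): review score adjusted by reviewer
    efficiency and by a recommendation weight given over 100
    ('acceptance is assigned a higher weight')."""
    return score * correction_factor(experience, peerness) * (decision_weight / 100)

def safa_average(report_scores):
    """End-decision index: the mean SAFA over all evaluation reports."""
    return sum(report_scores) / len(report_scores)

# Two hypothetical reports on the same manuscript.
r1 = safa_score(0.75, experience=0.8, peerness=1.0, decision_weight=90)
r2 = safa_score(0.60, experience=0.6, peerness=0.8, decision_weight=70)
print(safa_average([r1, r2]))
```

Because every step is an average or a product of normalized quantities, adding further correction factors (beyond the two TFREM factors) only changes the averaging in `correction_factor`, which matches the text's claim that the estimation procedure accommodates many factors.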

Limitations

The SAFA has certain limitations, like other scientific tools. The application has been developed in Microsoft Office Excel and may not have a very professional look; the author tried to avoid complications to make it more user-friendly. As mentioned in the earlier section, the SAFA is estimated based on the TFREM principle, so it may not completely minimize the bias due to reviewer attributes. Increasing the number of correction factors can be a remedy, but while extending the TFREM principle to include more factors may be procedurally easy and more effective, it is difficult to apply in practice.


Interpretation

A general decision rule for using the SAFA is that if a manuscript has a SAFA value equal to or greater than 0.5, the paper can be accepted for publication from the evaluation point of view; further justification for acceptance or rejection relies on the editor's view. If the SAFA ranges between 0.40 and 0.49, an article can be considered for further revision and possible publication. The SAFA can also be used to categorize articles by value: for example, an article with a SAFA between 0.30 and 0.39 can be selected for proceedings. It should be remembered that how and to what standard an article is accepted is completely an independent decision by the respective organization. The first issue of the International Journal of Business and Management Research (ISSN 1985-3599) used the SAFA to rank its articles. The SAFA (corporate version) is flexible in changing the cut-off point, which can be adjusted to the requirements of a discipline.
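The decision rule above can be expressed as a small lookup. The cut-off points come from the text; treating the bands as continuous (so a value such as 0.495 falls in the revision band) is an assumption, since the text quotes the bands as 0.40-0.49 and 0.30-0.39.

```python
def interpret_safa(safa):
    """Map a SAFA value to the decision bands stated in the text
    (bands treated as continuous; cut-offs are adjustable by the
    publication authority)."""
    if safa >= 0.50:
        return "accept (subject to the editor's view)"
    if safa >= 0.40:
        return "revise and consider for possible publication"
    if safa >= 0.30:
        return "consider for proceedings"
    return "reject"
```

With the corporate version's adjustable cut-off points, only the threshold constants would change; the banded structure stays the same.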
 