The readability of academic articles is generally low, and its impact on citation performance has been measured in many fields, including marketing. In this paper, readability scores of 9,350 abstracts and 2,742 full texts from top marketing journals are analyzed and compared with their citation counts in Scopus, Google Scholar, and Web of Science, using negative binomial multiple regression, correlation analyses, and ANOVAs.
The literature review aimed to identify as many factors as possible that can be quantified automatically with tools such as Python from data exported from Scopus. Citations and ways to study them are discussed, and readability formulas and their use in academia are outlined.
A Python-based method for analyzing large volumes of bibliometric article data, together with the readability of abstracts and full texts, is proposed.
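As a minimal sketch of how such scoring can be automated, the function below computes the Flesch Reading Ease score (one of the standard readability formulas discussed in the literature) for a piece of text such as an exported abstract. The formula's constants are standard; the syllable counter is a simple vowel-group heuristic and only approximates the hyphenation-based counts used by dedicated readability tools.

```python
import re


def count_syllables(word: str) -> int:
    # Heuristic: count groups of consecutive vowels (y treated as a vowel).
    # Real readability tools use dictionary- or hyphenation-based counts.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease:
    #   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sentences = max(1, len(sentences))
    n_words = max(1, len(words))
    return (206.835
            - 1.015 * (n_words / n_sentences)
            - 84.6 * (syllables / n_words))
```

Applied in a loop over abstracts exported from Scopus (e.g. as CSV), this yields one score per article that can then be matched against citation counts.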
In some journals, readability is correlated with citation counts; in others, it is not. According to the regression models, readability did not play a role in citations in the dataset. However, multiple other citation factors were identified: time, journal, self-citations, type of article, article length, references, title elements, and productivity. The results were mostly in line with previous findings by Stremersch et al. (2007) and other authors who have studied citation factors.
Readability and its impact on academia are discussed. Authors should use language that is as clear and concise as possible, employing jargon only when necessary.
Practical implications are offered to scholars, along with a discussion of limitations and suggestions for further research. The methodological choices account for most of the limitations, but they also open avenues for further research.