Thirty-year survey of bibliometrics used in the research literature of pain: Analysis, evolution, and pitfalls

During the last decades, the emergence of Bibliometrics and the progress in Pain research have led to a proliferation of bibliometric studies on the medical and scientific literature of pain (B/P). This study charts the evolution of the B/P literature published during the last 30 years. Using various search techniques, 189 B/P studies published from 1993 to August 2022 were collected for analysis; half were published since 2018. Most of the selected B/P publications use classic bibliometric analysis of Pain in toto, while some focus on specific types of Pain, with Headache/Migraine, Low Back Pain, Chronic Pain, and Cancer Pain dominating. Each study is characterized by its origin (geographical, economical, institutional, …) and its medical/scientific context over a specified time span, to provide a detailed landscape of the Pain research literature. Some B/P studies have been developed to pinpoint difficulties in appropriately identifying the Pain literature or to highlight some general publishing pitfalls. Having observed that most of the recent B/P studies have integrated newly emergent software visualization tools (SVTs), we found an increase in anomalies and suggest that readers exercise caution when interpreting the details of results in the B/P literature.


Introduction
During the last decades, research on pain has made massive progress resulting in an explosion of scientific and medical publications.
This increase in publications was accompanied by a continuous restructuring not only of the intellectual input, but also of the dissemination output of research on pain, one illustrative example being the recent emergence of journals dedicated either to pain in general or to specific aspects of pain. Although of lesser importance quantitatively, another factor explaining the increase in the number of publications is the explosion in the use of bibliometrics, a set of quantitative methods for the analysis of scientific publications (1), applied to specific literatures such as pain. This publication growth can be illustrated by a search of the PubMed database using the MeSH (Medical Subject Headings) term Bibliometrics, which retrieved nearly 13,000 publications distributed over 30 years in three 10-year periods: 623 publications from 1991 to 2000; 3,677 from 2001 to 2010; and 8,641 from 2011 to 2020. Additionally, more recent bibliometric papers in the biomedical field range over numerous topics such as cancer (2,3), radiology (4,5), and coronavirus (6,7). In this context, the objective of using bibliometrics varies: charting the growth and development of a research field (8); evaluating the progress of a researcher (9), an institution (10), countries (11), or a journal (12); or providing statistics to support science decisions, research policies, and collaborative research initiatives (13). Given the numerous bibliometric studies, what may appear as a duplication of studies on the same subject can be a source of confusion or misunderstanding if one is not fully aware of the different parameters used in each study, such as the time span analyzed, the database(s) used, the criteria for inclusion of papers, or whether a general or specific aspect of a subject is targeted.
Readers may dismiss or misinterpret the results of the bibliometric approach, perhaps because a lack of understanding of bibliometrics itself leads to a lack of confidence in the reliability of a study.
Considering these obstacles and given the increasing number of bibliometric studies on pain, our objective is to present a study of the existing pain bibliometric papers, using various bibliometric techniques, to highlight some general points in both the methodologies used and the objectives pursued. Based on an analysis of these papers, our aim is to provide readers with enough information and understanding to better appreciate the content of these studies, and thus to be better prepared for reading and analyzing future bibliometric studies on pain and related literatures. In our approach, papers are sorted and analyzed according to their goals: either to provide a general, detailed, or specific-topic description of the pain literature (or of specific pain subtopics), or to investigate (or highlight) some characteristics of the publication process, applicable not just to the bibliometrics of the pain literature but to the bibliometrics of all scientific and medical subject literatures.

Methods
The process developed from March 2021 to August 2022 is as follows: • A PubMed search was conducted with one of the keywords "bibliometrics, scientometrics, informetrics" associated with one of the pain-related keywords "pain, nociception, analgesia, headache, migraine, cephalalgia". Each term was truncated appropriately to retrieve variant word forms. These keywords (considered the basis of the pain scientific terminology) were searched in the titles, abstracts, or keywords of documents. The resulting documents (mostly papers or articles), with no language restriction, were further scrutinized by the two authors and retained if their contents were in line with the stated objective.
• The same procedure was performed on the Web of Science (WoS), Scopus, ProQuest, and Google Scholar. • The reference list of each paper was analyzed to capture possibly missed papers, and their citations (obtained using Google Scholar) were scrutinized to capture any additional missed papers. • Finally, a "random" search was conducted on general search engines such as Google, Bing, Qwant using additional pain and bibliometric terms such as "trigeminal neuralgia", "low back pain", "literature analysis", "quantitative scientific literature". Papers not retrieved earlier were added to our dataset.
This method can appear as an "unusual/unorthodox" way of searching, but we think it is well-adapted to our topic relating to the bibliometric analysis of the literature related to pain (B/P for short) for which using only the main descriptors "Bibliometrics" and "Pain" in a typical search strategy would not be adequate to capture the targeted literature and/or would attract too many irrelevant papers. Additionally, this strategy will, most likely, miss very few papers. In order to structure the analysis, each paper was classified in one of three categories: (A) General Purpose-where the aim of the paper is to provide an overview of a general or a specific topic (e.g., historical approach, specific type of pain) of the pain literature; (B) Non-Specific-where papers highlight the publishing characteristics or pitfalls of the pain literature using bibliometric analysis; and (C) Miscellaneous-where papers could not be assigned to one of the previous categories.
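The keyword strategy described above can be sketched as a small script. This is an illustrative reconstruction, not the authors' actual code: the keyword families come from the Methods text, while the helper name and the use of `*` truncation with PubMed-style field tags are our own assumptions about how such a query would be assembled.

```python
# Sketch (our own, hypothetical): build a PubMed-style boolean query from
# the two keyword families in the Methods, with '*' truncation so that
# variant word forms (e.g., "bibliometric", "bibliometrics") are matched.

BIBLIO_TERMS = ["bibliometric*", "scientometric*", "informetric*"]
PAIN_TERMS = ["pain*", "nocicept*", "analgesi*",
              "headache*", "migrain*", "cephalalgi*"]

def build_query(biblio_terms, pain_terms, field="Title/Abstract"):
    """OR the terms within each family, then AND the two families."""
    biblio = " OR ".join(f"{t}[{field}]" for t in biblio_terms)
    pain = " OR ".join(f"{t}[{field}]" for t in pain_terms)
    return f"({biblio}) AND ({pain})"

query = build_query(BIBLIO_TERMS, PAIN_TERMS)
print(query)
```

An analogous query (with each database's own field syntax) would then be run against Web of Science, Scopus, ProQuest, and Google Scholar, as the Methods describe.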

Results and discussion
From our various search strategies, 189 publications on B/P were deemed relevant and formed our dataset for analysis.
The earliest paper was published in 1993, and no further papers appeared until 1999. The first two 5-year periods (1998-2002 and 2003-2007) were relatively stable with 7 and 8 papers, respectively; since then, however, the number of B/P papers has exploded: 19 papers in 2008-2012, 34 in 2013-2017, and 115 from 2018 to August 2022, with three further papers "under review" or "in press".
The first authors of the 189 papers were from 30 countries, distributed in descending order of productivity: China (69 papers), the USA (35 papers), Brazil (13 papers), Canada and Spain each with 8 papers; South Korea (7 papers), Turkey (6 papers), France, India, and Italy each with 5 papers; Croatia (4 papers); and 19 countries with 3 or fewer B/P papers ( Table 1).
When the main topic of each paper is considered, papers providing a general and classical bibliometric analysis of the pain literature (Group A) represented three-quarters (143 papers, 75.7%) of all the papers; those aimed at highlighting pitfalls in the publishing process (Group B) constituted 16.9% (32 papers); and the rest, in Group C (14 papers, 7.4%), are miscellaneous papers with objectives other than those in Groups A or B. Some general information is presented for each topic and for each publication (in chronological order) in Tables 2, 3 (Group A), Table 4 (Group B), and Table 5 (Group C).
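The group breakdown above can be verified with a few lines of arithmetic; the counts are those reported in this study, and the snippet simply reproduces the percentages:

```python
# Check of the reported group breakdown: 143 (A) + 32 (B) + 14 (C) = 189,
# and the corresponding percentage shares rounded to one decimal.

groups = {"A": 143, "B": 32, "C": 14}
total = sum(groups.values())
assert total == 189

shares = {g: round(100 * n / total, 1) for g, n in groups.items()}
print(shares)  # {'A': 75.7, 'B': 16.9, 'C': 7.4}
```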

Datasets
The datasets comprise traditional and well-established institutional databases; holdings or collections of specialized libraries; one or several appropriate scientific journals; compilations of Congress or Conference Abstracts; two or more databases with duplicate documents removed; and sometimes a mixture of any (or all) of the above.
As expected, the Web of Science (around 47% of all B/P studies), PubMed (30%), and Scopus (7%) were generally the most used databases, singly or in some combination (Tables 2-5). Other databases (often country-based) were also used, but to a lesser extent.
Supplementing the three major databases, some studies have integrated their own national or specialized databases: for example, SinoMed (Chinese) and the Cochrane Library (a collection of medical/healthcare databases). The diversity of the databases used reflects the comprehensive approaches developed to search for and retrieve publications on various aspects of pain. For example, PsycINFO was used to identify methods of pain assessment (i.e., measures, scales, inventories, tests) in the research literature (170, 174); BDENF, a Brazilian nursing thematic database which is part of the Latin American and Caribbean Health Sciences Information System, was used to identify papers related to the diagnosis of pain by nurses (28).
Most of the papers in our study of the B/P literature used only one database; however, several studies used two or more. In addition to a few studies of single-journal datasets, five investigations used a group of journals: an earlier study to compare the Impact Factors (IFs) of pediatric anesthesia and pain articles from four anesthesia journals (159); a second study to provide a comprehensive list of the top-100 classic citations in the specialty of pain research from 11 pain-specific and 22 anesthetic-related journals (40); a third study to compare the IF with the Altmetric IF (quantitative and qualitative measures, complementary and/or supplementary to traditional citation-based metrics) of 18 perioperative, critical care and pain medicine journals (200); a fourth study to evaluate the IF bias of clinical trials published in nine pain journals (181); and a fifth study using ten pain or anesthesiology journals to illustrate the presence of "spin" in the abstracts and articles of RCTs (183). Additionally, Congress or Conference Abstracts have been used as datasets for several investigations (172,175,180), and a list of 98 academic pain medicine fellowship programs, compiled from the American Medical Association Fellowship and Residency Electronic Interactive Database Access, was used to examine the influence of research productivity on attaining professorships among members of the chronic pain medicine faculty (86).
Whatever the subject of interest may be, the choice of datasets for subjects such as our B/P analysis relies on a combination of the following: the desired level of scientific reliability, scope, and pertinence of the dataset; the accessibility of the dataset (free or for a fee); the ease with which the retrieved documents can be processed by, for example, data visualization software; and the familiarity of the researchers with the dataset.

Document selection
The dataset from which documents (e.g., articles, abstracts, reports) for bibliometric analysis are obtained, and the attention paid to the terms and phrases describing pain, are key to determining the quality of the selection and the accuracy of the results. The selection process is often greatly simplified when investigators choose publishing outlets that only contain "pain documents": pain-focused journals, proceedings of congresses on pain, or the publication lists of researchers working in a laboratory/medical center/institution dedicated to pain research. However, this is often not the case, and the strategy then becomes how to extract a set of "pain" documents from a dataset that contains both "pain" and "non-pain" documents. Depending on the aim(s) of the studies, investigators need to define a set of pain keywords, key phrases, and/or criteria which will most likely retrieve the desired papers. The field of basic and clinical pain research is characterized by multiple terms, by an overlapping of similar pain phenomena for different pain concepts, and by the frequent evolution of pain terminology, all of which make selecting an appropriate set of documents difficult. A great variation exists in the choice of pain terms: the pain literature is analyzed without any specificity; a large set of pain terms is used; pain synonyms/analogues/related terms (e.g., nociception, analgesia, neuralgia) are employed (30,34,35,47,123); the number of pain terms used may exceed 20 (35, 37, 150) and can reach 30 (54, 79), even though a few studies use only the single word "pain" (80, 100, 112, 140, 143). Alternatively, when a specific pain phenomenon is targeted (e.g., fibromyalgia, headache, or low back pain), the choice of pain terms is reduced to either one (19, 25,108,126,131,133,137,144) or very few (<5) keywords (29,68,69,73,96,155,185).
If the selection of pain terms plays a major role in the retrieval process, and if the papers give detailed and replicable descriptions of the procedures used, then the results can be presumed accurate. However, studies often lack precision in describing the selection process: for example, in the 2021 study of Dela Vega and colleagues assessing headache research impact and productivity among 11 SEA (Southeast Asian) countries, a "systematic search" was performed that included one or more of four pain-related terms or phrases (primary headache, migraine, trigeminal autonomic cephalalgia, and tension-type headache) whenever at least one author was from a SEA country. However, immediately following the search strategy is the ambiguous sentence: "Equivalent terms for "migraine", "tension-type headache," and "trigeminal autonomic cephalalgia" were also inputted in the search string" (96). No "equivalent" terms were stated; hence, readers are left to guess what additional terms or phrases were included. In other studies, the fields in which the pain keywords are searched (Title, Title/Abstract, Title/Abstract/Keywords) are not indicated (58, 72,74,85,106,185). This omission can have a dramatic effect on the number of documents retrieved and consequently on the analysis: a quick search on PubMed for 2001-2021 inclusive retrieved 153,365 documents with the term "pain" in the Title field alone, while searching the Title and Abstract fields more than tripled the dataset to 545,272 documents.
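The effect of field scope described above can be made concrete. The two query strings below are our own reconstruction of what such PubMed searches would look like (using standard `[Title]`, `[Title/Abstract]`, and `[PDAT]` range syntax); the hit counts are the ones reported in this study, as of the authors' search date:

```python
# Two PubMed queries differing only in field scope, and the magnitude of
# the difference reported in the text (153,365 vs 545,272 hits, 2001-2021).

title_only = 'pain[Title] AND ("2001"[PDAT] : "2021"[PDAT])'
title_abstract = 'pain[Title/Abstract] AND ("2001"[PDAT] : "2021"[PDAT])'

hits = {"Title": 153_365, "Title/Abstract": 545_272}
ratio = hits["Title/Abstract"] / hits["Title"]
print(f"{ratio:.2f}x")  # ~3.56x, i.e. "more than tripled"
```

Papers that omit which fields they searched therefore leave a more than threefold uncertainty in the size of the underlying dataset.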
A further difficulty in identifying the pain-related literature is linked to the intrinsic complexity of pain itself. Firstly, the syndrome causalgia was initially described by Mitchell and colleagues in 1864 (203). Since then, many synonyms (algodystrophy, algoneurodystrophy, Sudeck's atrophy, Reflex Sympathetic Dystrophy, Complex Regional Pain Syndrome) were used to refer to this syndrome (163,204,205), until the phrase Complex Regional Pain Syndrome (CRPS) was proposed by the International Association for the Study of Pain in 1994. A consequence for a bibliometric study is the diffusion of publications focusing on the same phenomenon but appearing under different names; this can lead to errors in estimating the number of publications. A quick search on PubMed revealed that from 1971 to 2000 several synonyms (causalgia, algodystrophy, algoneurodystrophy, Reflex Sympathetic Dystrophy, Sudeck's atrophy) were used; these were rapidly replaced from 2000 onwards by the term Complex Regional Pain Syndrome (Figure 1). These observations agree with two previous studies (157,163). Several other pain terminologies were also modified: "carpalgia" has replaced "pain in the wrist" (206); the term "trigeminal neuralgia" has supplanted "tic douloureux"; several recent papers favor the term "Persistent Spinal Pain Syndrome" over CPSS (Chronic Pain after Spinal Surgery) or FBSS (Failed Back Surgery Syndrome) (207-210); and after centuries of using dozens of terms such as "muscular rheumatism", "muscle calluses", "chronic rheumatic myitis", "fibrositis", and "muscular hardening" (211), the term "fibromyalgia" emerged in the mid-1970s (212). Its recognition as a syndrome occurred several years later (213), and the first criteria for the classification of FMS (Fibromyalgia Syndrome) in a well-designed, blinded study were published by the American College of Rheumatology in 1990 (214).
Secondly, the emergence of new pain terms, due mainly to continuing research and discovery, has added further complications in assembling pain-related literatures for bibliometric studies. An example is the term "nociplastic pain" proposed by Kosek and colleagues in 2016 (215) to describe a "mechanistic descriptor for chronic pain states not characterized by obvious activation of nociceptors or neuropathy … but commonly experienced by people worldwide" (216). There is also the concept of mixed pain referring to patients who have a substantial overlap of nociceptive and neuropathic pain symptoms in the same body area (217). Continued interest in mixed pain has resulted in the recognition of the term by the IASP/International Association for the Study of Pain (218). Another example is the growing utilization of the term "localized neuropathic pain" that concerns approximately 60% of neuropathic pain patients (219) and is supported by the fact that pain localization is one of the hallmarks when determining the choice of first-line treatment in patients with neuropathic pain (220).
Additionally, the following observations are indicative of a more global, complex, and evolving landscape of the pain literature: … From these observations, some background knowledge of the field of pain research (e.g., history, terminology) is necessary to engage in a bibliometric investigation and to produce a high-quality study.

Publication environment
The 189 B/P papers retained in this study, for which the sources of publication were available, were published in 122 different journals (Tables 2-5).
By far, the Journal of Pain Research with 29 B/P papers is the most productive; all other journals have five or fewer B/P papers: Pain has five, Anesthesia and Analgesia, Pain Medicine, and Pain Research and Management have four each, five journals have three, 14 journals published two, and 100 journals each have one B/P paper. As expected, a large number of papers (67 or 35.4%) in our 189 B/P studies were published in pain-focused journals with an international audience such as Pain. Figure 1 shows the evolution of the number of complex regional pain syndrome publications in PubMed from 1971 to 2020: the seven keywords/keyphrases were searched in the Title/Abstract fields and shown in five 10-year periods (1971-1980, …, 2011-2020), each period totaling 100% of publications. Among the 122 journals publishing B/P papers, the Journal of Pain Research is notable: firstly, it has 29 B/P papers while all the other journals contain five or fewer B/P papers (see above); secondly, 23 of the 29 B/P papers were published in the most recent years (2020-2022); and thirdly, 22 (75.9%) papers were authored by Chinese researchers, over twice the rate (36.5%) of the overall number (69 of 189; see Table 1) of Chinese B/P papers. Additionally, in 2021-2022, the Journal of Pain Research published three B/P papers on the same topic, "migraine/acupuncture" (123,124,133). Finally, discussions below in the sub-section "Anomalies …" pertain to many B/P papers in the Journal of Pain Research and offer strong incentives for reflection on the possible overlap of future B/P studies. At the document-type level within journals, most of the B/P papers are "Articles" (Article, Original paper, Original research, Research article) or "Reviews" (Review, Comprehensive review, Topical review, Mini review).
However, several B/P papers are presented under other labels such as "Meta-analysis" (60, 93, 167), "Short communications" (26), "Letters" (14), "Virtual project" (195), "Correspondence" (192), or "Editorial" (24).
This diversity in publishing format is not surprising and illustrates the variety of approaches followed to investigate the pain literature through a bibliometric prism. It can be viewed as a positive contributing factor heightening the visibility of pain research in the scientific and medical community.

Journal impact factor (JIF) or impact factor (IF)
Within the following B/P papers, the JIF is employed as a bibliometric index; three sets are discussed, each showing how the JIF is used: (1) The first set includes bibliometric studies in which the IF appears as a "journal-level bibliometric index": generally, the authors provide tables in their Results section that include listings of journals containing "pain papers" in decreasing frequency order accompanied by the JIF of each journal (30, 31, …). (3) The third set includes a few miscellaneous uses of the IF index with various aims, for example: a. to evaluate the number and type of Croatian publications in the field of pain research, and to compare it with an identical dataset by researchers from Graz, Austria, as the two have similar scientific productivity (37); b. to highlight the necessity of developing and increasing pain research in Africa (51); c. to compare the impact on the scientific literature (using the JIF) with social media indexes such as Altmetric scores (alternative metrics complementary to citation-based metrics, see: www.scienceeditorium.com/blog/journal-impact-factor-versus-altmetrics/) (191,199,200); d. to suggest the use of Altmetric analysis as an alternative to the JIF (91); e. to study the publications issued from abstracts presented at the 2010 World Congress on Pain (172); f. to analyze whether the JIFs of journals publishing low back pain systematic reviews are associated with journal endorsement of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations and with the reviews' methodological quality (177); and g. to see whether a difference exists in the citation of retracted articles of a pain researcher between high-impact and low-impact journals (188).
In summary, despite the criticisms that have accompanied the Impact Factor for decades, mainly directed at its misuse and/or misinterpretation (231, 232), the presence and influence of the IF in bibliometric papers, including those in pain-related fields, will most likely continue. Nevertheless, it remains the responsibility of the scientific and medical communities, both authors and readers, to use and interpret the information provided by this index appropriately.

Citation analysis
During the last decades, author and paper/article citation counts (as measures of impact or influence) have rapidly come to be considered among the main metrics in any bibliometric toolkit.
Among the various B/P studies, a dozen or so were mainly aimed at investigating the pain literature through citation analysis as indicated in their titles ("Top-cited articles in …" or "Most frequently cited papers in …"). Two studies considered all papers in the pain-related literature (16,47); others were restricted to literatures of specific pain such as fibromyalgia (76,91), neuropathic pain (116), back pain (58, 61), postoperative hyperalgesia (122), headache (68, 120) trigeminal pain (72); and one study combined the literatures of pain and depression (84).
Although most of the citation studies were similarly arranged, that is, by the distribution of the top-cited papers over journals, countries, and/or institutions, some studies were limited to brief descriptions (16,58,61,72,76,84), while others were more comprehensive, either discussing their findings (47,105,133,137,141,200), integrating their results with Altmetric analysis (91), or using existing methods such as PageRank and HITS (Hyperlink-Induced Topic Search, see http://pi.math.cornell.edu/∼mec/Winter2009/RalucaRemus/Lecture4/lecture4.html) to augment citation analysis in the PubMed database (184).
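For readers unfamiliar with PageRank as a complement to raw citation counts: it treats citations as links and lets a paper's weight depend on the weight of the papers citing it. The sketch below is our own minimal power-iteration version on a toy citation graph (the node names and data are hypothetical; this is not the code of study 184):

```python
# Minimal power-iteration PageRank over a toy citation graph.
# links maps each paper to the papers it cites.

def pagerank(links, d=0.85, iters=100):
    """Return a PageRank score per paper; d is the damping factor."""
    nodes = set(links) | {p for cited in links.values() for p in cited}
    n = len(nodes)
    rank = {p: 1 / n for p in nodes}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in nodes}
        for src, cited in links.items():
            if cited:  # distribute src's rank over the papers it cites
                share = d * rank[src] / len(cited)
                for dst in cited:
                    new[dst] += share
            else:  # dangling paper (cites nothing): spread rank uniformly
                for dst in nodes:
                    new[dst] += d * rank[src] / n
        rank = new
    return rank

citations = {"A": ["C"], "B": ["C"], "C": []}
scores = pagerank(citations)
print(max(scores, key=scores.get))  # prints C, cited by both A and B
```

Unlike a plain citation count, the score of a paper here grows when it is cited by papers that are themselves highly ranked, which is the kind of augmentation the study above applies to PubMed data.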
However, though the studies mentioned above carry information about the pain literature, we should remind ourselves what citation analysis comprises:
• individual citations, for which a large array of heterogeneous motivations may influence the choice of researchers in citing one paper rather than another (233);
• limitations in the choice of parameters (datasets, search terms, document type(s), time period) which may bias the results. The influence of the search criteria is illustrated by a short comparison between two studies of the "top 100" articles in RadioGraphics: using different databases (with different journals indexed) and different time selection criteria produced an overlap of only 70%; that is, each study identified 30 frequently cited articles unique to the search criteria used (234);
• the fact that citation counts are just one of the many bibliometric indexes available and provide only a partial and non-qualitative view of a literature landscape (235).
In summary, citation analysis is a powerful investigative tool; however, the conclusions reached from studies relying only on this metric must be scrutinized. Readers should always remind themselves that the integration of citation analysis within a wider bibliometric approach is needed to avoid any mis-, under-or over-interpretation.

Gender of first authors
Not surprisingly, the gender of authors of the pain literature has either been the focus of, or been largely integrated in, several bibliometric studies. At the end of the 20th century, Strassel et al. performed a citation analysis of contributors to the pain and analgesia literature and observed that "Few women were first authors of any most frequently cited paper" (16). Two decades later, in a brief commentary, Szilagyi and Bornemann-Cimenti noted an impressive increase in the percentage of female authors in pain-related publications, from 26.4% in 2005 to 40.5% in 2015 (171). This trend corroborates the results of two literature studies: in neurology by Nguyen et al. (236) and in neuroscience by Dworkin et al. (237). In the first study, the authors noted an increase of female authorship in journals classified in the MeSH Journal Category "Pain" from 7.6% to 35.4% over 2002-2020 (236). Although this positive trend is certainly encouraging and heading towards parity, a recent study revealed a highly skewed gender disparity in the publications of the pain research community, with a prevalence of 70.6% male first authors and 81.6% male senior authors (176).
Their results, however, should be viewed with caution since:
• only the 20 highest-cited papers from each of seven journals affiliated with the seven leading societies in the academic pain community were analyzed;
• only papers authored by persons in the USA formed the dataset;
• only the principal (first and last) authors were considered; and finally,
• only a 5-year span (2014-2018) was considered.
Nonetheless, a general trend toward gender authorship parity is evident, even if obstacles remain. Following the evolution of gender disparity in the pain literature, and comparing it with those in closely related disciplines such as neurology or neuroscience (or in more general medical and scientific fields), is not only interesting but necessary to uncover "systemic deficits" that may be ameliorated with a "cultural and macroscopic organizational-driven change" (176).

Country affiliations of first authors
Table 1 shows China with 69 first-authored publications followed by the USA with 35; together these two account for over one-half (55.0%) of the papers, with the other 85 publications distributed over 28 countries in decreasing number of publications. Conspicuously absent are first-authored papers from the UK, Germany, Japan, and the Netherlands, countries which are consistently among the most productive in pain-related research (35, 40) and well-versed in bibliometric research (238-240).
While some researchers have published several bibliometric papers on pain in general and on specific aspects of pain using similar approaches (23,35,39,49,69), the large majority of first authors have published only one or two B/P papers. It therefore appears to us that, rather than arising from an institutional decision, the choice of using bibliometrics to study the pain literature can mainly be interpreted as a desire of individual researchers, made possible by colleagues with bibliometric knowledge, which thereby increases the publication of bibliometric studies in non-bibliometric journals.
Furthermore, the domination of China over the USA in first-authored papers in our study parallels a similar trend in the PubMed database over all biomedical publications: a search in March 2022 using only two parameters, China or USA in the "Affiliation" field and Bibliometrics in the "Title/Abstract" field, resulted in 454 documents for China and 268 for the USA.

B/P studies using classical analysis
Each of the B/P papers that developed a general or classical analysis of the pain literature (Tables 2, 3: Group A) can be characterized through four main parameters: (1) The first parameter is related to the type of pain considered.
We note that in the largest group of studies (50 of 143), pain is considered without any restriction on the origin or the nature of pain (33,35,73,85). The B/P papers that focus on specific types of pain are distributed as follows: studies on low back pain/back pain (n = 15), headache/migraine (n = 14), chronic pain (n = 12), fibromyalgia and neuropathic pain (10 papers each), cancer pain (n = 9), postoperative pain (n = 8), and a few on other specific types of pain such as trigeminal neuralgia (153), labor pain (90), postoperative pain (98), or psychological pain (94). (2) The second parameter indicates the targeted population. The general population is targeted in most B/P investigations (25,30,76); however, several studies have restricted their population to patients with cancer (100, 126), diabetes (130), some critical illness (65), or depression (102). … (42,55,186). As numerous improvements have been made during the last decade to provide more efficient and adaptable animal models of pain (241), studies considering the ethical problems inherent in pain research (242), together with bibliometric studies, should continue: firstly, to quantify the evolution of the scientific community regarding the management of pain in animals and thus globally enhance the quality and reliability of experimental research (55); and secondly, to quantify the basic research on pain according to the prescribed animal models.

Publishing pitfalls in B/P papers
Papers aimed at attracting the attention of readers to potential publishing pitfalls in science and medicine, including B/P studies, are shown in Table 4 (Group B).
Of particular interest in academia are preliminary or ongoing studies given orally or presented as posters at congresses and their subsequent extension (or not) into articles in peer-reviewed journals. Two studies concern the abstracts presented at the 13th World Congress on Pain, sponsored by the International Association for the Study of Pain (IASP) in 2010 at Montreal, Quebec, Canada: one revealed that the overall "publication rate" (from Congress abstracts to full-text published papers) was 27.5%, with variations among countries ranging from 12% for Switzerland to 38% for China (172); the second showed that just 52% of the abstracts dealing with Randomized Controlled Trials (RCTs) were later published as full-text papers (175).
Another study, of papers presented during the 9th Brazilian Congress on Pain held in 2010 at Fortaleza, Ceará, Brazil, showed that only 8.9% appeared later as full-text papers (243). The higher publication rate (22%) for Brazilian papers presented at the IASP Congress in Montreal, also held in 2010, may confirm the importance authors attach to international visibility (172). Explanations generally given for "non-publication" include lack of time to prepare the manuscript, lack of support from co-author(s), and time consumed by other ongoing studies (162). There can also be a lack of confidence in the quality and/or design of the study, perhaps arising from questions and discussions during the oral or poster presentation, as well as the discovery of already-published papers with similar results. It is worth noting that the post-Congress period considered for eventual journal publication was 2 years for De Oliveira (162), 6 years for Saric et al. (175), and 7 years for Akkoc (172); comparisons among the three studies are therefore of limited value.
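The publication rates discussed above are simple proportions of Congress abstracts that reached full-text publication. A minimal sketch of the computation, using hypothetical abstract and full-text counts (only the percentages reported by the cited studies are taken from the text):

```python
# Publication rate: share of Congress abstracts later published as
# full-text, peer-reviewed papers. The counts below are hypothetical
# illustrations, not data from the cited studies.
abstracts_by_country = {"Switzerland": 50, "Brazil": 45, "China": 60}
full_texts_by_country = {"Switzerland": 6, "Brazil": 10, "China": 23}

def publication_rate(abstracts: int, full_texts: int) -> float:
    """Percentage of abstracts that reached full-text publication."""
    return 100.0 * full_texts / abstracts

for country, n_abstracts in abstracts_by_country.items():
    rate = publication_rate(n_abstracts, full_texts_by_country[country])
    print(f"{country}: {rate:.1f}%")
```

Note that such rates are only comparable when the post-Congress follow-up window is the same, which, as noted above, was not the case across the three studies.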
Another area of concern is the agreement between the content of a Congress abstract and its subsequent published full-text paper. An analysis of abstracts presented at various World Congresses on Pain revealed that a non-trivial percentage of the "abstract-publication pairs" show discordances: 31% for RCT abstracts; a high of 79% for abstracts reporting preliminary results (175); and 40% for systematic review abstracts (180). Such large discrepancies between abstract-publication pairs were noted in a recent editorial by Puljak and Saric, who asked whether researchers should trust (and therefore cite or reference) abstracts from pain conferences; their studies have shown that "conference abstracts presenting the highest levels of evidence at the largest global pain congress are not necessarily dependable" (244).
In a similar vein, two recent bibliometric studies pinpointed discrepancies between the content of Congress abstracts and that of their full-text manuscripts. The studies concern low back pain papers obtained from the Physiotherapy Evidence Database (PEDro): one focusing on clinical trials (177) and the other on systematic reviews (178). The authors concluded that Congress abstracts of clinical trials and systematic reviews on low back pain were incomplete, showed evidence of spin (overemphasis of beneficial effects), and were inconsistent with their full-text equivalents. Additionally, upon further scrutiny of the published systematic reviews, three out of four were found to have "critically low" methodological quality (177, 182).
Finally, it is worth noting that despite the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 Checklist (245) and its precursor guideline published in 1999 (246), the content of many low back pain systematic reviews and/or meta-analyses needs to be read with caution (167, 177, 180).
Occasionally investigated is the presence (or absence) of impact factor bias in papers published in pain journals. Testing the hypothesis that "studies with positive results are more likely to be published in journals with greater impact factor when compared with articles with negative or inconclusive findings" on RCTs published in pain journals, Mukhdomi et al. found no "impact factor bias in the pain literature across many journals over many years," despite differences in parameters such as the origin of the data, the size of the sample, the presence of a stated hypothesis, or sponsorship funding (181). In contrast, a decade earlier, De Oliveira found "publication bias in the anesthesiology literature especially in higher clinical trial impact factor journals" (162).
Even if the results of Mukhdomi et al. (181) must be interpreted in the context of their limitations (a short time span, 2012-2018, and only nine pain journals considered), they are an encouraging sign for researchers and clinicians to submit "negative or inconclusive findings" to high-impact journals. They can also be viewed as a stimulus for similar studies that broaden the criteria, for example by including non-pain-focused journals, extending the period of investigation, or considering document types other than RCTs.
Four other papers concerning methodological approaches used in pain research complete this section. One, using a case-controlled approach, provided a 10-year follow-up of citations to retracted papers authored by Scott S. Reuben, an anaesthesiologist and pain researcher; Reuben was convicted of data fabrication and 25 of his papers were finally retracted, the highest number of retractions to date (188). The study showed that "invented or falsified data" continued to be cited over a decade later, which in turn most likely distorts the results of the citing papers. The magnitude of Reuben's scientific fraud has been likened to the financial scandal of Madoff (http://www.elmundo.es/elmundosalud/2009/03/20/dolor/1237574917.html). Although the study highlights only one researcher, it illustrates that the field of pain research is not immune to fraud and inappropriate scientific behavior. With the development of databases dedicated to reporting retractions of scientific papers, such as Retraction Watch (https://retractionwatch.com/), we encourage interested researchers to use bibliometrics to investigate the field of "retracted pain publications," as it could be used to track the spread of fraudulent papers in the pain literature; such an investigation has been done for oncology (256). This would help pain researchers remain vigilant in their analysis of the literature and aware of suspect articles. In addition to the papers on publishing pitfalls (Table 4, Group B), a small number of papers with specific or unique aims have been published recently (Table 5, Group C).
These 14 papers are characterized by the diversity of their origins (from seven countries); their publication formats, e.g., Letter (200), Correspondence (192), Original Article (197-199), Review (196), or Virtual Project (195); and their topics: the analysis of the pain literature through an Altmetric prism (190,191,199,200), the development of a new bibliometric indicator to predict the success of an analgesic (189), the quantification of topics of existing pain research subject areas using natural language processing (198), the assessment of the evolution of the representation of pain in the brain over more than four decades using literature-mining tools (194), etc. These studies provide evidence of the efficiency of bibliometrics for deciphering the pain literature and contributing to the knowledge of pain and its diffusion beyond the world of classical scientific publishing.
Evolution of bibliometrics in the B/P literature
In our study, the first paper quantifying the pain research literature was a two-page Letter published in 1993, which highlighted the distribution (by age group) of pediatric pain publications in the 1980s; its data are presented in a conventional table (14). Then, for about 20 years, after a few editorials highlighted the importance of conducting bibliometric investigations of the pain literature (257, 258), B/P researchers used standard productivity applications (e.g., Microsoft Office, now Microsoft 365, including Word, Excel, and PowerPoint) to display the data derived from their B/P analyses as tables, charts, graphs, world/country maps, etc. In these papers, results are sorted and classified so as to give the reader a simple but easily interpretable picture of the results and of the message proposed by the authors (see most of the papers in Tables 2, 3 published from 1999 to 2016). During the last decade, along with the growing importance of bibliometrics and advances in digital technologies, software packages dedicated to improving bibliometric investigation and data visualization have emerged (259); these tools have been used extensively in the analysis of the pain-related literature. In 2016-2021, over one-half (56.1%) of the 73 B/P studies providing general bibliometric analysis (Tables 2, 3) used a software visualization tool (SVT). Among the SVTs available, CiteSpace was used most often (24 papers), followed by VOSviewer (10 papers), HistCite (2 papers), and BibExcel, Bibliometrix, BICOMB, Publish or Perish, and Word Cloud (each with 1 paper). Of these 41 papers, four used two SVTs and one used three. Detailed descriptions and reviews of the most relevant SVTs are available in several papers (1, 259, 260).
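SVTs such as VOSviewer and CiteSpace are built around co-occurrence networks: nodes represent items (keywords, authors, countries) and edge weights count how often two items appear in the same paper. A stdlib-only sketch of this underlying computation, using invented keyword lists rather than a real bibliographic export:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-paper keyword lists; in a real bibliometric study
# these would come from a database export (e.g., Web of Science, PubMed).
papers = [
    ["chronic pain", "opioids", "bibliometrics"],
    ["chronic pain", "low back pain"],
    ["low back pain", "opioids", "chronic pain"],
]

# Edge weights of the co-occurrence network: each unordered keyword
# pair appearing together in a paper increments that pair's weight.
edges = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```

SVTs then lay out this weighted graph visually (node size, link thickness, cluster color), which is where the legibility problems discussed in the next section can arise.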

Anomalies in the B/P research literature
Throughout our analysis of the B/P literature, we found irregularities that occurred frequently enough to deserve the attention of readers: (1) The date range of the papers analyzed (e.g., 2010-2020) is often fuzzy (or incomplete) rather than inclusive; that is, retrieval took place before the stated closing date; see for example (59,67,89,99,101,103,108,128,134,144,149). (2) In several studies, the retrieval date is approximate and barely exceeds the specified date range, even though a delay of several weeks (months or longer) is needed for the databases used, such as PubMed and Web of Science, to be updated; see for example (80,85,131,133,135,152,153,155). Although this anomaly may not have serious consequences for the overall message of a study, such imprecision may introduce bias into the quantitative evaluation and consequently into its interpretation. (3) Some papers use world-map visualizations in which a color scale matches each country with its number of pain publications; several show discrepancies between the numbers in the map and the numbers in the text; see for example (79,82,103,111,113,114). Additionally, in the world maps of several studies, some countries are overlooked and hence not displayed (82, 103, 116). (4) In some studies, the data displayed in figures are illegible; see for example (79, 82, 88, 101, 111, 112, 114, 115, 130, 138-140, 149-151, 153). (5) Several studies display figures with redundant information, for example the number of papers, the number of citations, and the number of citations per paper; see for example (78, 79, 82, 88, 102, 111, 113-115, 119, 130, 138, 149).
(6) In many papers, the reader has to become familiar with indexes used in graph theory, such as "centrality, closeness, betweenness, silhouette, strength, log-likelihood ratio": these terms are sometimes defined (124); at other times only superficially introduced, see for example (67,80,88,89,92,100,101,103,105,108,112,130,137); and sometimes not defined at all (94,97,117,123,134,135,138-141,150). (7) In many studies whose figures, produced with software visualization tools, consist of graphs with nodes representing a variable (e.g., country, author, institution) interconnected according to intensity or proximity, the reader is confronted with: • a surfeit of figures, see for example (78, 79, 82, 88, 89, 93, 98, 101-103, 108-115, 118, 123-125, 130-132, 134, 135, 137-142, 149-151, 153); • figures with a melange of colors for nodes, links, and labels, which makes the message hard to interpret and understand (93,105,131,139); • links without nodes or labels (67, 79, 80, 82, 88, 89, 94, 102, 103, 105, 111-115, 117, 118, 130, 132, 134, 138, 140, 142, 149, 150); and • links with nodes but without labels, or vice versa (67, 78-80, 82, 88, 89, 100, 102, 103, 105, 108, 111-115, 117, 118, 125, 130-132, 134, 135, 139-142, 148-150, 153). Along with the recurring anomalies listed above, other observations of non-standard scientific practice include, inter alia: lack of agreement between information displayed in figures and tables (e.g., 112); inadequate description or omission of key search parameters, such as the topic, in the Methods section (e.g., 58); absence of citations in the Discussion section (e.g., 152); references cited in the text or tables that are omitted from, or incorrectly listed in, the Reference section (e.g., 67,97,118,141); non-conforming in-text citation practice (e.g., 119); and countries that are misnamed (e.g., England for the UK), appear twice (e.g., Germany), or are confused with states (e.g., TX, NJ, CA listed for the USA).
The last observation may be due to the combining of two or more databases (data sources) with varying granularities in a "field" designated as "country".
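The graph-theoretic indexes mentioned in item (6) have precise definitions that readers can verify for themselves. As an illustration only, a minimal sketch computing closeness centrality (one of the indexes cited) on an invented co-authorship graph; the author names and edges are hypothetical:

```python
from collections import deque

# Hypothetical co-authorship graph: nodes are authors, and an edge
# means "co-authored at least one paper together".
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B", "E"},
    "E": {"D"},
}

def closeness(graph: dict, node: str) -> float:
    """Closeness centrality: (n - 1) divided by the sum of shortest-path
    distances from `node` to every other node (connected graph assumed)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:  # breadth-first search gives unweighted shortest paths
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return (len(graph) - 1) / sum(dist.values())

# Rank authors from most to least central in this toy network.
ranked = sorted(graph, key=lambda n: closeness(graph, n), reverse=True)
print(ranked)
```

Reporting such an index without at least this level of definition, as item (6) notes, leaves readers unable to interpret the figures that depend on it.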
Finally, we would encourage readers to be vigilant when reading past, present, or future bibliometric studies of a subject literature such as "pain". We would also like to encourage authors and editors to take extra care in writing and editing papers before publication; this practice will certainly lighten the burden on readers and (perhaps) increase the profile of such publications and their authors through higher citation counts in databases as well as higher Altmetric scores.

Conclusion
This study presents a broad view of the numerous bibliometric investigations of the pain literature developed during the last 30 years. Most of these studies provide general descriptions of the pain literature with filters adapted to the objectives of each study: selecting a type of pain; focusing on a geographical (world, continent, country) population with or without a specific health-related status; or highlighting pain therapeutics. Other B/P studies are dedicated to revealing or analyzing publishing pitfalls in the pain literature, and a few papers describe miscellaneous applications. Since the number of B/P papers has increased dramatically in recent years, providing useful information for the pain medical and scientific community, readers are advised to be cautious when reading and interpreting the results of B/P papers.

Author contributions
CR initiated the project, collected the bibliography, and wrote the manuscript. CW collected the bibliography and wrote the manuscript. Both authors approved the final version of the manuscript.