The San Francisco Declaration: An Old Debate from the Latin American Context


Authors:
Wilson López López
Resource type:
article
Publication date:
2013
Institution:
Pontificia Universidad Javeriana
Repository:
Repositorio Universidad Javeriana
Language:
Spanish (spa)
OAI Identifier:
oai:repository.javeriana.edu.co:10554/33369
Online access:
http://revistas.javeriana.edu.co/index.php/revPsycho/article/view/5998
http://hdl.handle.net/10554/33369
Rights:
openAccess
License:
Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0)
Description
Summary: In the past few weeks, new criticisms of the impact factor have been raised by research communities. What is surprising in this case is that these criticisms come from scholars in the misnamed "hard" sciences, and that they present a set of complaints that are new neither in their substance nor in their formulation. We have long known that a quantitative indicator such as the Impact Factor is not only insufficient but also highly vulnerable, since the citations-to-articles ratio can create situations in which, for example, a journal with few articles but a well-controlled supply of citations, generated by groups interested in raising the journal's indicator, does in fact obtain that increase, an effect I call a "bubble" effect. The creators of these indicators, however, have already taken measures designed to prevent such practices, ranging from warnings to editors to the creation of additional indicators and more diverse ways of measuring impact. It has also long been shown that scientometric indicators measure communication among academic peers, not the social appropriation of knowledge or the impact of professional training; different measures are probably needed for those contexts. Yet it is the academic communities themselves, especially those in the hard sciences and in countries with higher output, the very ones now rediscovering these facts, that have helped legitimize these indicators as a criterion of quality.

The problem is not the indicators per se. Rather, it is the academic communities, operating from universities and other entities, that have become the problem by giving the indicators significant weight in both research assessment and resource allocation. At least in our context, it is clear that the final decision on whether a researcher receives resources does not depend on the Impact Factor but on a complex peer-review system that relativizes the weight of the indicators. Nor can we ignore the role these indicators have been given by incentive systems within universities and other institutions, both for researchers and for research groups. The indicators cannot reflect the whole spectrum of their efforts or the dynamics of knowledge-producing communities in the early stages of their development. Nevertheless, there is now sufficient evidence that, once communities are consolidated, these indicators can provide information about certified quality. That is why most of these measures offer several informative dimensions and are useful: they bring transparency to the processes that account for research activity. Without them, we would have no other way to understand this activity.

Moreover, open-access systems such as REDALYC and SciELO, distinguished projects committed to promoting open access, have been facing the challenge of improving access for communities that simply cannot afford to pay for it. These initiatives have fought for quality and for the democratization of access to knowledge; REDALYC, in particular, has also proposed alternative indicators for regional academic communities. Until now, those voices had been ignored, and it is only now, when mainstream scholars raise their own to discuss something we had been debating for years in our region, that they are heard again.
Furthermore, at the global level the SCImago group has done outstanding work that complements the measurement of isolated indicators, developing multiple and more complex measures of the production, impact, and use of knowledge. I think we also need to ask ourselves what forces and interests lie behind this discussion today. Could it be that emerging communities are prompting these declarations that call for a change in the rules? Has this rediscovery become important now that these emerging academic communities have a voice and citations? In this sense, these rebellions must be taken with a pinch of salt, because we have been reflecting on the subject for years and this is no discovery for us. We have discussed the need for other measures and, of course, have long been clear about the importance of the social impact of knowledge.

Wilson López López, Editor