Academic publishing is in crisis, but also at a time of great opportunity for the authorship and dissemination of knowledge. People nowadays rarely have enough time and reward for quality refereeing; editors work mainly for free for highly priced and often uncooperative commercial publishers; libraries have no money to keep up with rising journal subscription prices and are often trapped in package deals which make them dependent in the long term and force them to buy unwanted journals; and there is no guarantee that free services like the arXiv will stay free and that old knowledge will be preserved. The internet and cheap e-readers are now available, opening new possibilities; publishers also perceive them as a danger because they make infringement of author and publisher rights easy. Here we also discuss academic evaluation practices, based on peer review and on the use and measures of citation statistics, for assessing creative originality and discovering plagiarism.
This community is interested in reforms for a better publishing future, and in developing technologies for those reforms. For more, see:
The initial version of this article (and some of the later additions) is compiled from Lab-related private wiki zoranskoda:citations.
While in court it is easier to win if one has a prior registration of copyright with a copyright office, in principle most copyright and patent laws give advantage, in provable cases, to the factual priority of the work even if it is not registered. That is, every author’s work is a priori protected from the moment of creation; registration with a copyright office merely makes priority easier to prove in disputes.
According to some historians and anti-copyright activists, copyright in the 19th and early 20th centuries mainly worked for the authors, while today it is structured in a way which protects mainly the publishers and less the authors. In particular, authors often lose battles with their own publishers when attempting to make parts of their work free or to publish it in a form they prefer.
Citations and impact factors
- Joint Committee on Quantitative Assessment of Research, Citation Statistics, a report from the International Mathematical Union (IMU) in cooperation with the International Council for Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS)
This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using “simple and objective” methods is increasingly prevalent today. The “simple and objective” methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.
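Part of the appeal of such metrics is precisely how easy they are to compute. As a hypothetical illustration of the report's point, here is the h-index (the largest h such that an author has h papers with at least h citations each) applied to two made-up citation records; the function and the sample data are assumptions for illustration, not from the report:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# Two hypothetical authors: one with a few highly cited papers,
# one with uniformly modest citation counts.
print(h_index([50, 40, 10, 6, 5, 1]))  # → 5
print(h_index([5, 5, 5, 5, 5]))        # → 5
```

The identical score for two very different records illustrates how a single number can erase the complex judgment it is meant to replace.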
Arnold-Fowler also prompted
The publisher’s game:
Ecologists Carl and Ted Bergstrom have written some papers on game-theoretic aspects of publishing, which mathematicians might want to study:
and some of the other papers and information at http://octavia.zoology.washington.edu/publishing/.
Price and copyright aspects of academic publishing:
- Guardian: Academic publishers make Murdoch look like a socialist html
- John Baez, What we can do about science journals
- Joan S. Birman, Scientific publishing: A mathematician’s viewpoint, Notices AMS, pdf
- G. Kuperberg et al., Mathematical journals should be electronic and free(ly) accessible, Notices Amer. Math. Soc. 45 (1998), 845, pdf
- Richard Poynder and his blog on academic publishing – focusing on open access
- The Cost of Knowledge Researchers taking a stand against Elsevier; and the statement pdf
- Gowers’ weblog: elsevier-my-part-in-its-downfall and list of Elsevier’s journals
- Gowers: a more modest proposal – a system to cooperate on suggestions to improve preprints
- Donald Knuth‘s letter of resignation to Journal of Algorithms, Oct 2003
- Rob Kirby’s list of pricing 128 math journals 1995-1999
- Rob Kirby, Whither Journals?, pdf, Notices AMS 59:9, 1272-1274, Oct 2012
- Princeton’s open access policy, url
- The crisis in scientific publishing links at Univ. of Maryland
- John Baez at Café: Journal Publishers Hire the “Pit Bull of PR” (2007)
- Heather Joseph, Executive Director, SPARC: prime time for public access
- MathOverflow publishing-journals-articles-without-transferring-copyright, how-to-select-a-journal
- Oleg Pikhurko’s experience: html
- Wikipedia: Copyright, copyright law, intellectual property activism, anti-copyright, free and open source software, SOPA, Creative Commons, Aaron Swartz
- public.resource.org, wikipedia
- Zoran Škoda: subtracted value added by the journals
- Huffington Post, Library.nu, Book Downloading Site, Targeted In Injunctions Requested By 17 Publishers
- Chris Lee, Open peer review by a selected-papers network (and accompanying Forum thread)
- Robert Rosebrugh, Experience with a free electronic journal (Theory and applications of categories), Notices of Amer. Math. Soc. (Jan 2013)
- Colin Rourke, The history of G&T
- Ilya Kapovich’s letter to the editor, Notices AMS, Nov 2012 (pdf), worries that there are more errors in published papers than before
- Brian Osserman’s Opinion piece (pdf) recommends the 2-stage refereeing procedure, Notices AMS, Nov 2012
- 2012 letters from Elsevier to the math community and the free journal access page
- Democracy and Mathematics – an Café post by André Joyal; related Joyal’s comment at Math2.0
- Gowers announcing Episciences Project, blog from Jan 16, 2013
- Nature about Episciences Project: mathematicians-aim-to-take-publishers-out-of-publishing
Software initiatives for academic publishing
Plagiarism, bad science authors/editors etc.
The science of peer review in science
- Is it reliable? There is low agreement between reviewers (Daniel, Mittag, & Bornmann, 2007). There is higher agreement about which papers should be rejected than about which papers should be accepted.
- Should reviewers be blind to the identity of authors? An original study at the British Medical Journal found that blinding reviewers to the identity of the authors improved the quality of the reviews (McNutt, Evans, Fletcher, & Fletcher, 1990). When this was extended to other journals, there was no evidence that it improved the quality of the reviews (Justice, Cho, Winker, Berlin, & Rennie, 1998; van Rooyen, Godlee, Evans, Smith, & Black, 1998).
- Should authors and the public be blind to the identity of reviewers? A study found that informing authors of their reviewers’ identities (i.e. an open review process) did not change the quality of the reviews (van Rooyen, Godlee, Evans, Smith, & Black, 1999). In an unpublished study, the authors extended this work by opening up the review process further: they posted the identities of both reviewers and authors on their website, and this had no effect on the quality of the reviews (discussed in Smith, 2006).
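The "low agreement between reviewers" in such studies is typically quantified with a chance-corrected statistic such as Cohen's kappa, which measures how much two raters agree beyond what their marginal accept/reject rates would produce by chance. A minimal sketch, with made-up reviewer decisions (the data below are assumptions for illustration, not taken from the cited studies):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters judging the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed fraction of items on which the raters agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal frequencies per category.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical accept/reject decisions by two reviewers on 10 submissions.
a = ["acc", "rej", "rej", "acc", "rej", "rej", "acc", "rej", "rej", "rej"]
b = ["acc", "rej", "acc", "rej", "rej", "rej", "acc", "rej", "acc", "rej"]
print(round(cohens_kappa(a, b), 3))  # → 0.348
```

Here the reviewers agree on 7 of 10 papers, yet kappa is only about 0.35, conventionally read as "fair" agreement: most of the raw agreement is what two raters who both reject often would reach by chance.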
Related Azimuth Wiki pages