|
Post by sociol on Sept 21, 2013 21:00:14 GMT -5
What do people think about this new journal's idea? Sounds like a good plan. Any thoughts/experiences/expectations/etc.? www.sociologicalscience.com/
|
|
|
Post by Thoughts on Sept 23, 2013 6:52:49 GMT -5
Certainly seems like a bold experiment, and the top-tier journals (i.e., AJS/ASR/SF) seem to have gotten out of hand (probably because for folks like us on the job market, spamming those three with work is part of our entry ritual). Looking at the journal's web page, I see a lot of stuff that sounds nice initially but provokes a LOT of questions when you think about it longer.
(1) They're claiming that replication studies will get equal weight with new research. Great, but I'm puzzled by how exactly to "replicate" some paradigmatic works in recent sociology. Sure, you can directly replicate, say, a social psychological study (or at least SHOULD be able to). But I'm puzzled by how you are supposed to replicate, say, any of the recent ASA distinguished publication winners.
(2) "Sociological Science is a general interest sociology journal, for specialists. Scientific advance occurs primarily through dialogue between specialists. Submissions to Sociological Science shall therefore be judged by the extent to which they contribute constructively to that dialogue, and not by their appeal or accessibility to a general audience." At first, my thought was (probably as they intended), "Finally! I can just directly say what my contribution is without that ponderous section explaining The Literature to the uninitiated that you always have to wade through in AJS!" But then: that premise (scientific advance is produced through specialist dialogue) is, at best, debatable, and certainly not always borne out by the history of science. I am also quite uncertain how this squares with the open dialogue that is supposed to happen once articles are published on the web.
(3) Standards of theory and evidence. SocSci "encourages but does not require" formalization of theoretical arguments, and "Direct quotes should be used only when a respondent's statement has been recorded verbatim." Initial thought: "Be specific and accurate? FINALLY!" But then: "Wait, there's plenty of theory and ethnography I've read and respect where this kind of formalization is not possible! What about, say, trying to formalize Bourdieu, or writing an ethnography from a dangerous fieldsite without any direct quotations, no matter how quickly afterwards I took notes?"
(4) Evaluative, not developmental. Good that this move will likely bring down time-to-print; less good that people will get less feedback. (From people I've talked to who are involved with the project, this is in part born of frustration with having to come up with something constructive to say about bad AJS/ASR submissions that are obvious rejections.)
Tentative conclusions: the journal seems like an interesting experiment, replicating things happening in other social-science fields (PLOS One, for example). Its format sounds closest to a peer-reviewed blog, and there are some really impressive names on the masthead. Judging by what they say they do, and also by the amount of ink spilled on certain topics on the website, it seems probable that the journal is going to tilt toward nifty quant, "Big Data" (whatever that actually turns out to be), "data-driven theorizing" (whatever that is), experimental work, and formal theory. Mostly good things, certainly, but "a general interest journal (for specialists!)"? That does not seem likely.
Finally, it seems like there's an underlying tension between specialists and generalists (and the public) at the heart of this. Great that the papers are going to be open access. (I'm a huge fan of that.) But if there's so much speed, brevity, and emphasis on getting the work out there "for debate" in public, even though it's essentially meant to be interpretable only by specialists, who's supposed to be debating, exactly? Just imagine your pet controversial finding from the social sciences over the last few years and (if SocSci becomes successful) multiply that.
|
|
|
Post by tough follow up on Sept 23, 2013 7:50:54 GMT -5
My own thoughts aren't nearly as well put-together as the previous poster's (great points, by the way), but I'll just add that I am curious/excited to see who and what gets published there.
Right now, it's really open, and on orgtheory it's been suggested that maybe SocSci will become a leading journal, second only to ASR/AJS in prestige(!). I think that is a bit optimistic, but I also think their early pieces can set the tone for the kind of work published there. Will it really be good work by senior faculty who want to get it out instead of letting it sit in R&R hell at a traditional journal? (I mention senior faculty because presumably they have less need to publish in the established journals, since they already have tenure.)
If that's the case, then great, and I think having a journal that publishes good research and gives decisions so quickly could be really useful. On the other hand, if they have a reputation for giving quick decisions but there are questions about the quality of what's published there, then I think it becomes "just another journal."
|
|
|
Post by Monopoly Markets on Sept 23, 2013 12:02:34 GMT -5
Even if you never plan to submit to SocSci, you should hope for its success, and even hope that it rivals AJS/ASR. Consider the increasing review times, the increasing use of multiple R&Rs with new reviewers, editors never saying "Do THIS and we'll accept the paper," crazy page limitations (Social Forces is now 8,000 words, counting references--so if you cite some paper with a long title, it counts against you), and the basically complete disrespect toward the authors who supply the content (who are then "invited" to provide reviews--which, somehow, can now be trusted to be based on the author knowing something, even though during the review the author was treated as knowing nothing). The underlying reason for all of this is this: ALL the soc journals have the same publication model, and thus, by definition, they hold a monopoly from which there is (or has been so far) no escape.
Their monopoly power has turned abusive. The only hope is for viable alternatives to challenge them.
Currently, AJS/ASR can dangle hope out there for 6, 9, 12, 15, 18, 21, 24 months and more. Each day the poor authors wait, they hope for a positive resolution, or, eventually, ANY resolution so they can send the paper elsewhere if need be. Send it elsewhere, and they encounter the same ridiculous process. Through it all, AJS/ASR sit on important work, and even consider asking yet ANOTHER anonymous person to weigh in, safe in the knowledge that the authors have no real recourse.
The existence of a successful SocSci can upend this dynamic. If famous people submit their work to SocSci, it makes waves, and SocSci becomes successful, then AJS/ASR will either have to modify their treatment of authors or simply find themselves no longer able to have their pick of the "best" work. And other journals will tend to follow suit, because the craziness of the current model will then become outlier behavior instead of the alleged signal of the "rigor" of the process that it currently is.
|
|
|
Post by isitjustme on Sept 23, 2013 13:09:47 GMT -5
Open access = costs for publishing. So an Assistant Professor who publishes a 6,000-word article will pay something like $230, on top of the submission fee of $35 ($10 until the end of the year). I know these kinds of rates are standard among open-access STEM journals, but honestly I'm not sure I'm willing to pay a couple hundred dollars just to get an article out. At this point, I would rather wait (a realistic amount of time) for the peer-review process.
|
|
|
Post by Junior faculty on Sept 23, 2013 13:58:22 GMT -5
I really hope it succeeds, for all the reasons mentioned above, and that we see more of these kinds of experiments. I just won't be able to submit any of my better papers there until we see what kind of reception its initial issues get. I'm all for expanding the discussion and publication model; I just need to get tenure.
|
|
|
Post by costs on Sept 23, 2013 14:10:42 GMT -5
isitjustme,
I definitely get what you're saying, but you might be underestimating the "costs" associated with the traditional review process, at least if you assume every hour of work you do is worth something.
First, time between first submission and acceptance can be several years at some journals (and don't get me started on what happens if your paper is rejected at that point), and during that time your paper isn't being counted for promotion/tenure. Second, you have to factor in time spent on revision memos, rewriting, "robustness checks"/more analyses, etc.
If you're a junior faculty member, and you're trying to get tenure, I don't know whether a few hundred dollars upfront (when you have a conditional acceptance) really costs more than the time associated with months of review and revisions.
|
|
|
Post by teachingcollege on Sept 23, 2013 17:04:00 GMT -5
One problem with the open-access model, which is rarely talked about, is its effect on junior faculty at teaching colleges (SLACs or lower-tier public universities). Those faculty often have nothing in the way of institutional funding to help with these publication fees. At research universities, it is common for faculty members to get 'start up' funds, professional development funds, or other funding which can pay for these costs.
The old publishing model, where you had to work at an institution that could afford a subscription, limited one's access to information. The new model, where you have to pay to create the knowledge, limits the ability of some to create the knowledge in the first place. It differentially empowers some (i.e., those at top research universities) to publish their work while disadvantaging others. I'd hate to think we're entering a brave new world where those at teaching-focused schools won't be able to get their work published.
Yes, one can always pay these costs out of pocket, but it's pretty unreasonable to expect an Assistant Professor earning, for example, $45,000 with a west-coast cost of living to do so.
I think I like the old model better than this new open-access push.
Full disclosure: I work at a teaching-intensive public university with few funds for research.
|
|
|
Post by Monopoly Markets on Sept 23, 2013 17:47:48 GMT -5
Agree with your points, and sympathize with your plight. But the old model isn't disappearing just yet. The question I'd ask is: how does the "old" (i.e., current) model serve the faculty you describe? Faculty at teaching schools have less time to do research. So: waiting 9 months to get crazy comments back that require loads of unnecessary re-running of models, obtaining useless literature to read so you can cite it, and revising; then doing all that work, all with the PROMISE that the resubmission will be sent to a new reviewer who can, and most likely will, raise new crazy things to re-run, obtain, and read...
And while the teaching faculty spend 3 years in a process where a research professor might spend only 2, all that time there's the chance someone else publishes a paper that makes yours unpublishable. I think the challenge posed by the new model will pay dividends for teaching faculty primarily by forcing the old journals to start acting professionally--to start recognizing that all the back-and-forth occurring between reviewers and authors should be IN the journals, and that if some reviewer won't sign their name in public to their claim(s), then their claim(s) isn't worth excrement. It is very important that people look at ALL the costs, not just the most visible ones (i.e., the dollars it will take to publish the paper in journal X).
|
|
|
Post by Thoughts on Sept 24, 2013 6:31:51 GMT -5
Having gone through the borderline-unprofessional behavior at AJS/ASR myself, I'm sympathetic to these complaints (which, I gather from people involved in SocSci, are really the main target of the journal). But just to amplify my original pause (which comes with a healthy dose of wanting to see how the journal turns out): I hope that SocSci does well according to the above framework and contributes to a healthy journal ecosystem in sociology, but I do NOT hope it becomes the dominant model for publication in the field. Why not?
(1) SocSci will "upend the field"/"the old model isn't disappearing just yet." This is an experiment that AJS/ASR (especially ASR) have forced. I was just given a paper to review for ASR that's a response to an R&R with six reviews. SIX! But let's say that SocSci becomes everything the organizers dream: it could become another monopoly position, so fast, so nimble, so high-impact that you "need" a publication there to make a splash (at least in the eyes of some faculty). Is that good? Just take a look at the flashy garbage that goes into PLOS and allied journals fairly frequently.
(2) I note that the folks mentioning the virtues of SocSci have voiced (aside from the review-time issue, which I wholly agree with) mainly the concerns of quantitative investigators: "robustness checks" and "someone else publishes a paper that makes yours unpublishable." I suppose it's possible that this happens to theory people, ethnographers, or historical comparativists, but I've never actually heard of it happening. I have heard of it happening (with some frequency) to quantitative people working with just-released datasets or new statistical packages. This is not to diminish the gravity of the complaint, but simply to note that it seems to reinforce that SocSci is not meant to reflect the state of sociology as a whole, but rather one part of it.
(3) Reviews should take place in public, because if someone won't sign their name to a review, it's not worth "excrement." I've gotten crazy reviews too, and have had this impulse. But it is a dangerous one to indulge. Imagine that Herr Weber were alive and submitted his latest hybrid HLM/Simulation/Experiment-in-the-field/Big Data paper on how deviance affects health outcomes and development in the global south. Herr Weber is a senior professor at Princeton, and I (a lowly grad student) notice that there's a deep intellectual flaw in his whole project. Do I make that point publicly, and suffer the reputational damage, or is it really nice to have an anonymous venue in which to lodge those concerns? Generally, the latter. My point is that anonymous review was developed precisely to protect the democratic impulse of scientific review and advance. (And besides, it's a caricature to say that this open exchange doesn't happen in journals today. Just look, for example, at the famous exchange between Wacquant and his critics, and more recently between Christian Smith and John Martin in Contemporary Sociology. They clearly demonstrate the value of public review and exchange! /sarcasm)
The one concern that seems valid across the discipline is that long review times impinge on people's tenure clocks (and the job market for grad students). Fair enough, and indeed a deep concern, but there has got to be a way to bring those times down for the field as a whole without one part of it seceding and forming its own publication venue. Say what you want about AJS/ASR, but reading those journals does have the virtue of forcing us to "see" things going on in almost the whole field, however much we may scoff at parts of it sometimes.
|
|
|
Post by fwiw on Sept 24, 2013 9:35:56 GMT -5
"Fair enough, and indeed a deep concern, but there has got to be a way to bring those times down for the field as a whole without one part of it seceding and forming its own publication venue."
I hear comments like this pretty frequently ("there has got to be a way"), and I agree that while a new publication venue is one way to get around it, there are others. Jerry Davis commented on orgtheory that the mean time to first decision at ASQ is 39 days, and I've heard that review times for many of the crim journals are pretty reasonable as well.
|
|
|
Post by Monopoly Markets on Sept 24, 2013 10:23:32 GMT -5
Review times may be your main concern, but they are not the main concern of everyone. I could live with long review times if reviewers actually reviewed. I understand grad students and non-tenured faculty cannot. But the main concern of tenured faculty (and, I suspect, of grad students and non-tenured faculty too, once they think about it) is editors treating reviewers like co-authors.
Sociology is not physics (thankfully). There is a high degree of disagreement. But editors often tell authors they have to convince the reviewers. This makes reviewers co-authors, and the reviewers know it, so some demand unreasonable things or keep changing their demands, all to keep the paper from ever being published. (Because, face it--reviewers are ALSO competitors in many cases; if your findings or perspectives question theirs, many sociologists will try to block the paper from publication. And as the review process keeps adding reviewers, your chance of drawing such a person increases over the course of the process.) And if you have two reviewers who have themselves battled for years, good luck navigating between Scylla and Charybdis.
Editors have it wrong. Authors are supposed to convince readers, not reviewers. Editors are supposed to use reviewers just to assure that basic requirements are satisfied (e.g., logical requirements, study design). Editors turning reviewers into co-authors is key to why the process has become abusive. And editors can do this when authors have no place else to turn. I doubt any tenured prof would be upset enough about long review times to launch a new journal. But many, tenured and non-tenured alike, would be upset enough about being repeatedly allocated multiple shadow co-authors to say, "Screw it! Let's go elsewhere."
|
|
|
Post by Thoughts on Sept 24, 2013 13:23:47 GMT -5
This seems like a nice, articulate statement of the "evaluative, not developmental" credo of SocSci. Monopoly Markets, I ask out of genuine curiosity: does this rigmarole of appeasing multiple "shadow co-authors" happen that often? I can only speak from direct experience: acceptance and rejection at AJS/ASR, and reviewing for both about a dozen times now. Each time I've gone through this process, the reviewers were generally really professional and not unfairly demanding, and the editors strongly steered away from those that were. In my own experience being reviewed (again, accepted AND rejected), the reviewers were actually really helpful and made the final product better.
I've heard some folks complain about this process (and especially lately about the editorial incoherence of it at ASR), and have now directly seen it with the aforementioned kind-of-ridiculous invitation to be a high-single-digit reviewer for something. But all this seems like a failure of the editors rather than of the reviewers, no? I'm genuinely curious about what the true institutional blocks to reforming those practices are. I also hear SocSci getting lots of buzz from quantitative/analytic/computational/etc. folks, but I wonder what's there for folks who don't do that kind of work. (Again, I applaud them for leaving the door open to ethnography, interviews, historical work, or whatever, but am skeptical about how such work will be evaluated in practice.)
|
|
|
Post by Monopoly Markets on Sept 24, 2013 19:04:04 GMT -5
I take your question as a real question, and I confess I do not know how common it is. I would imagine such matters vary by area. In some areas, the most influential people see the journals as vehicles for hashing out issues, and they set the tone of the area. In other areas, the most influential people see journals as vehicles for (what I would call) hero worship, and they set the tone of theirs. Surely this varies by area or even by question. The question for us is: how effective is the field if editors give this much power to reviewers?
I agree this is an editor issue--they're the ones at fault, NOT the reviewers. Reviewers can only stake out positions; editors decide to make you re-write your paper 5 times, holding out the carrot of publication and the stick of having to withdraw the paper and re-submit it elsewhere, starting all over with possibly the exact same reviewers.
Two more responses. First, as for being scooped: I wonder how many ethnographers are currently doing fieldwork on the post-2008 economic dislocations? Or on race after the rise of Obama? If your paper gets held up in review until 2016, how much literature will reviewers (and readers, IF the paper is accepted at that point) be able to add to your "must cite" list? How poor will your paper look if it ignores work that came out a year before yours, EVEN though you may have submitted your work first? Indeed, is it possible your work will become irrelevant with the delay?
Relatedly, I just don't understand the claim that the problems SocSci is presumed to address are not relevant for theoretical, ethnographic, or historical work. The exact same problems identified above could occur for such work. For example: the author has a historical/comparative analysis of the failed Venezuelan coup of 2002, the successful Iranian coup of 1953, and the failed Soviet coup of 1991. A reviewer claims one can't make any sense of these without including the Chilean coup of 1973. No matter how many times you re-write the paper, this reviewer is not placated. The reviewer should be forced to make their case to the public (by writing their own rebuttal paper), not be allowed to hold up the paper. The value of comparison cases is a matter of field debate, and only multiple conversation partners can establish it. It is not a matter of one person who happened to study case X holding hostage all work that does not include that case (and thus may not cite theirs). Or: the author has a formal theory to address issue X. Reviewer A says you MUST consider Bourdieu. Reviewer B says you MUST consider Coleman. This is ridiculous--there is NO theorist that one MUST consider for anything. How many times did Bourdieu cite Coleman? How many times did Coleman cite Bourdieu? Theorist names reflect ongoing conversations--if either reviewer wants to change the conversation, write a paper; don't hold up someone else's paper.
These things happen too many times, to too many people, in too many areas of the field. Finally, enough highly visible people have had enough. We should applaud this effort, hope it adds to the set of publication possibilities, and support it even if we never submit anything to it, by citing works in the journal so it becomes recognized as influential. If that happens, our dealings with other journal editors will tip in favor of authors. And THAT will be good for everyone.
|
|
|
Post by rollercoaster up on Jul 28, 2014 17:24:43 GMT -5
Based on what SocSci is publishing and who they are publishing, this journal is on the fast track up the prestige rankings! I recommend submitting.
|
|