
Claim of the day


betty

Recommended Posts

Claim of the day: Female physics students may improve their grades by reflecting upon and writing about the values most important to them. True or False?


The app is bad, but unfortunately it's not entirely the app's fault. Psychology has been going through a major crisis over the past few years, one that is especially pronounced in the area the app's questions come from. Distrust has grown to the point that people have started re-running old studies, one after another, often without success, and many concepts that used to be taken on faith are slowly falling apart.
Do you happen to have an article about this? I'm out of the loop, but I'm interested. Edited by Radagast

Claim of the day: Female physics students may improve their grades by reflecting upon and writing about the values most important to them. True or False?
- around 90% of published articles confirm the hypotheses stated in their introduction
- the hypothesis here was almost certainly the claim quoted above
=> a 90% chance that the claim is true

Do you happen to have an article about this? I'm out of the loop, but I'm interested.
I don't have one article, but I do have plenty...

First, a positive piece which says that everything will be fine and that this is not the first time science has made a U-turn and stopped believing in an idea.

I started taking an interest in the topic when Daryl Bem (a big name) published a study claiming that precognition exists. That in itself is not such a problem (although...); the bigger problem is that the people who failed to replicate his findings had trouble publishing their non-replications.

The next moment in the crisis came when it was discovered that Diederik Stapel, also a big name, had fabricated the data for 55 of his published studies. People who fake results exist in every discipline and every country, but the Dutch psychologists did something that had not been done before: instead of firing him and hushing the matter up, they formed a committee that spent a year carefully digging through all of his published papers, talking to every collaborator of his they could reach, and wrote an extensive report on the methods used to fake results. Among other things, they argue that the state of the field itself partly helped so many fabricated studies get published (see the spoiler). The report's greatest value is that it also gives advice (in a separate report, and to some extent in this one) on how simple statistical tests can be used to check whether data are plausible; a sketch of one such check follows the excerpt below. Here and there I can see that it has had an effect.

Verification bias arises in the investigated publications in a variety of ways, the most important of which are enumerated below.

• An experiment fails to yield the expected statistically significant results. The experiment is repeated, often with minor changes in the manipulation or other conditions, and the only experiment subsequently reported is the one that did yield the expected results. It is unclear why in theory the changes made should yield the expected results. The article makes no mention of this exploratory method; the impression created is of a one-off experiment performed to check the a priori expectations. It should be clear, certainly with the usually modest numbers of experimental subjects, that using experiments in this way can easily lead to an accumulation of chance findings. It is also striking in this connection that the research materials for some studies show the use of several questionnaire versions, but that the researchers no longer knew which version was used in the article.

• A variant of the above method is: a given experiment does not yield statistically significant differences between the experimental and control groups. The experimental group is compared with a control group from a different experiment – reasoning that 'they are all equivalent random groups after all' – and thus the desired significant differences are found. This fact likewise goes unmentioned in the article.

• The removal of experimental conditions. For example, the experimental manipulation in an experiment has three values. Each of these conditions (e.g. three different colours of the otherwise identical stimulus material) is intended to yield a certain specific difference in the dependent variable relative to the other two. Two of the three conditions perform in accordance with the research hypotheses, but a third does not. With no mention in the article of the omission, the third condition is left out, both in theoretical terms and in the results. Related to the above is the observed verification procedure in which the experimental conditions are expected to have certain effects on different dependent variables. The only effects on these dependent variables that are reported are those that support the hypotheses, usually with no mention of the insignificant effects on the other dependent variables and no further explanation.

• The merging of data from multiple experiments. It emerged both from various datasets and interviews with the co-authors that data from multiple experiments had been combined in a fairly selective way, and above all with benefit of hindsight, in order to increase the number of subjects to arrive at significant results.

• Research findings were based on only some of the experimental subjects, without reporting this in the article. On the one hand 'outliers' (extreme scores on usually the dependent variable) were removed from the analysis where no significant results were obtained. This elimination reduces the variance of the dependent variable and makes it more likely that 'statistically significant' findings will emerge. There may be sound reasons to eliminate outliers, certainly at an exploratory stage, but the elimination must then be clearly stated.

• Finally, entire groups of respondents were omitted, in particular if the findings did not confirm the initial hypotheses, again without mention. The reasons given in the interviews were ad hoc in nature: 'those students (subjects) had participated in similar experiments before'; 'the students just answered whatever came into their heads', but the same students had in the first instance simply been accepted in the experiment and the analysis. If the omitted respondents had yielded different results they would have been included in the analyses.

• The reliabilities of the measurement scales used and the reporting thereof were often handled selectively, and certainly to the experimenters' advantage, in the sense of confirming the research hypotheses. It is impossible to 'test' hypotheses with unreliable measurement instruments, and therefore, for example, the reliability was estimated for a part of the research group, with the unreported omission of 'unreliable' subjects. Or items were selected differently for each study in such a way, which did lead to a reliable instrument, but with no awareness that this was achieved at the expense of the mutual comparability of studies.

• Where discrepancies were found between the reliabilities as reported by the researcher (usually the alpha coefficient) and as calculated by the statisticians, the reported values were usually conspicuously higher. If the reliability of a dependent variable or a covariate was not reported, its value often turned out to be too low relative to the accepted standard (e.g. less than 0.60).

• Sometimes the reliability was deliberately not reported, in particular if it was extremely low. For instance, a co-author reported that the supervisors urged that the data be sold as effectively as possible, and discouraged attempts to undermine the data, which might make editors and reviewers suspicious. If they asked any questions the missing data would be provided later.

• There was also selective treatment of the measurement scales, depending on what it was required to prove. Variously one or two dimensions (underlying variables) were used with the same set of items in different experiments, depending on what was most expedient in the light of the research hypotheses.

• The following situation also occurred. A known measuring instrument consists of six items. The article referred to this instrument but the dataset showed that only four items had been included; two items were omitted without mention. In yet another experiment, again with the same measuring instrument, the same happened, but now with two different items omitted, again without mention. The only explanation for this behaviour is that it is meant to obtain confirmation of the research hypotheses. It was stated in the interviews that items that were omitted did not behave as expected. Needless to say, 'good' ad hoc reasons were given, but none were mentioned in the publication, and the omissions could be ascertained only by systematically comparing the available survey material with the publication.

• When the re-analysis revealed differences in significance level (p values) when applying the same statistical tests, it was usual for the values reported in the articles to be 'more favourable' for the researcher's expectations. Similarly, incorrect rounding was also found, for example: p = 0.056 became p = 0.05.
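
A minimal sketch (Python with numpy) of the kind of simple statistical check referred to above; the specific checks, function names and numbers are my own illustrative assumptions, not the committee's actual procedure. It re-computes Cronbach's alpha from raw item scores (to compare against the reported reliability) and tests whether a reported mean is arithmetically possible for the stated sample size when responses are integer-valued (a check later popularized as the "GRIM test").

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Scale reliability; items has shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def mean_is_possible(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """True if a mean reported to `decimals` places can arise from n integer-valued responses."""
    nearest_sum = round(reported_mean * n)   # closest attainable whole-number total
    return any(
        s >= 0 and round(s / n, decimals) == round(reported_mean, decimals)
        for s in (nearest_sum - 1, nearest_sum, nearest_sum + 1)
    )

# Hypothetical usage: a 6-item scale scored 1-5 by 40 respondents.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(40, 6))
print(round(cronbach_alpha(scores), 2))   # compare against the alpha reported in the article

print(mean_is_possible(3.51, n=28))       # False: no whole-number total gives 3.51
print(mean_is_possible(3.50, n=28))       # True: 98 / 28 = 3.50

Neither check proves anything on its own; they only flag numbers that deserve a second look.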

Then John Bargh, again a big name, has a midlife crisis and writes a venomous post on his blog after some more recent researchers failed to replicate his stereotype-priming effect. His study claims that reading words that make you think of old age makes you walk more slowly (i.e., behave like an old person). The replicators claim the effect shows up only when the experimenter knows which participants are in the group that read the old-age words. He has since taken his nasty post down, but there are still blogs here and there where it is discussed.

The latest case, stereotype priming again: the researcher is Ap Dijksterhuis (not quite as big a name), and again a venomous response to a non-replication, which has apparently also been taken off the net, but there is a sensationalist Nature article about it, and a blog post, inspired by Dijksterhuis's case, about the problem of the missing clear theory. That is the last thing I read on the topic, so my previous post is largely inspired by it. Personally, I find it shocking how... silly the initial hypotheses sometimes are.

And here is one more Nature article, about replications in psychology and positive findings. It also discusses conceptual replications, which are essentially what I call putting the emphasis on the interpretation of the data rather than on the data themselves: when someone finds something that could fit into someone else's interpretation, that counts as a conceptual replication, even though it is not the same thing that was studied.

In response to all this, initiatives to raise methodological standards in scientific psychology are appearing, along with tools for checking the validity of effects that have been reported across many articles (where the absence of an effect may simply never have been published; the idea behind one such tool is sketched below), initiatives to replicate well-known results, and research methodology in general is currently attracting a lot of attention.
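
The "tools for checking the validity of published effects" mentioned above are, as far as I know, mostly tests on the distribution of reported p-values; p-curve analysis is the best-known example. The crude sketch below (Python, standard library only) captures only the core intuition, not the published procedure, and the function name and example p-values are hypothetical: if an effect is real, significant p-values cluster near zero, whereas results selected to clear the 0.05 bar pile up just below it.

from math import comb

def right_skew_p(p_values, alpha=0.05):
    """One-sided binomial test that more than half of the significant
    p-values fall below alpha/2, i.e. that the 'p-curve' is right-skewed."""
    sig = [p for p in p_values if p < alpha]
    k = sum(p < alpha / 2 for p in sig)
    n = len(sig)
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n   # P(X >= k), X ~ Bin(n, 0.5)

# Hypothetical p-values harvested from a set of published studies:
reported = [0.003, 0.012, 0.021, 0.041, 0.044, 0.046, 0.048, 0.049]
print(round(right_skew_p(reported), 3))   # 0.855: no sign of right skew in this toy set

A large value, as here, means the set of significant results looks no better than what selective reporting alone could produce; it says nothing by itself about any single study.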

Edited by betty
