
Scientific papers of interest to the general public


Filipenko


These days it is not even necessary to reprogram cells back into the stem cell state; a real neuron or heart cell can be made directly from a subcutaneous fibroblast. The next step is growing whole tissues for chemical testing, or growing organs from human cells inside animals.
This very possibility of direct re-differentiation is what I wanted to write to Ljilja about. The other day I talked to a girl from the neighboring lab, and she showed me her "pulsing areas" in the plates. So fibroblast --> myoblast, I think in their case via iPS cells, all the way to pulsating cells in culture. In cultures derived from cardiac patients she observes an irregular beat. Cool? :) Not my field, but she told me she hasn't yet seen anyone manage to do this.
  • 2 weeks later...
It has already been done. The Laugwitz group published it in The Lancet, and another group in Nature. Both groups work on modeling heart disease in vitro, and that is now one of the most popular directions in the development of iPS technology and of reprogramming in general. The problem is that those pulsating cells are still too heterogeneous, they do not correspond exactly to adult myocytes, and the cell cultures are usually a mixture of all sorts of things. Plus, the exact developmental route of those tissues is still not understood in detail, so it is hard to understand the genesis of some diseases.
  • 3 weeks later...
It can alone explain over 15% of the variation in GDP. The GDP maximizing size is around 13.5 centimetres, and a collapse in economic development is identified as the size of male organ exceeds 16 centimetres.
This makes perfect sense. Once men reach a penis size they consider satisfactory, the need to compensate for the missing centimetres with money (expensive car, big house, important position at the company, big salary, etc.) in order to impress the opposite sex disappears, and economic activity therefore starts to decline. And once they reach a size that turns them into non-stop studs, they stop doing anything at all, and there's your economic collapse.
  • 2 weeks later...

Eh, modelers, modelers... you should have stuck to your little boats and airplanes. But no, they just had to go and model the climate and prove that we're in for some serious cooking as that globe keeps warming. And yet, here comes a counter-finding from where you'd least expect it: NASA says we are in fact cooling ourselves off much better than these alarmists think: http://news.yahoo.com/nasa-data-blow-gaping-hold-global-warming-alarmism-192334971.html

And once they reach a size that turns them into non-stop studs, they stop doing anything at all, and there's your economic collapse.
Or once they get to the stage where they can no longer get through all their women, they stop advertising themselves so they don't keel over completely :lol: anywayz, I've got one good link: http://fora.tv
  • 2 weeks later...

A few months ago I jokingly told someone here (hazard?) that the next revolution in telecommunications will happen when we tear through Shannon's theorem the way a pig tears through a greasy bag. These days a man has turned up who claims he has pulled it off; thy name is Steve Perlman. Bombastic claim aside (that he has beaten Shannon's theorem), the guy is no charlatan at all: this link notes that Perlman built Quicktime, founded WebTV, and reminds us that some call him the "Edison of Silicon Valley" (so who's his Tesla?). If you ask me, I believe nothing until I see the math, and there is no math in this white paper Steve wrote. Which is perfectly fine; right now he is doing a publicity stunt, and you don't need higher mathematics for that. Still, while I wait to see in black and white how he screwed Shannon's theorem, I can rattle off some 3 examples of why what he describes in that white paper doesn't always work, doesn't work everywhere, and doesn't work at anything like the massive scale he expects. But I do like the name - DIDO. No, there's no L, no use straining your eyes.
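For context on what "beating Shannon's theorem" would even mean: the Shannon-Hartley limit caps the error-free rate of a single link at C = B log2(1 + S/N). A minimal sketch with made-up numbers (nothing here comes from Perlman's white paper):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit of a single AWGN link, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Made-up numbers: a 20 MHz channel at 20 dB SNR.
snr_linear = 10 ** (20 / 10)
print(f"{shannon_capacity(20e6, snr_linear) / 1e6:.1f} Mbit/s")
# Anything above this, per link and per hertz, would "beat" Shannon;
# multi-antenna schemes instead add parallel spatial links rather than
# raising the per-link bound.
```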

Edited by ObiW
An information-theoretic perpetuum mobile. This is my favorite part :)
The complete answer to this question is very long, involving immensely complex mathematics, very carefully designed software and hardware, and new data communications and modulation techniques.
Looks to me like Steve ran out of cash and wants to make a buck off the VCs with something that (can't) work.

The part where he says everything is based on probing the wireless channel at the start and then, based on that, computing the waveforms sent to the DIDO APs has 3 hidden little problems:
a) the wireless channel changes constantly (a consequence of something called Rayleigh fading)
b) the mobile terminal has to send high-resolution data (the detailed digitized waveform of the received test signal) back to the center
c) transferring the waveforms from the center to the APs is also very data-intensive
d) it isn't described how the mobile terminal sends data to the AP (this looks like some kind of advanced broadcast)
Having said that: if they have found even a partial solution that really doubles (or triples) channel capacity :Hail: :Hail: :Hail:
edit: now I'm like the Holy Roman Inquisition from the Pythons... 3 problems: a, b, c, d)
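Point (a) is easiest to see with a toy precoding example. Below is a minimal sketch of zero-forcing precoding (generic multi-user MIMO math, not Perlman's actual algorithm), assuming an i.i.d. Rayleigh channel: the precoder cancels inter-user interference only for the channel it was computed from, so any drift between probing and transmission leaks interference.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_aps = 3, 3

def rayleigh_channel():
    """I.i.d. Rayleigh-fading channel matrix (users x APs), unit average power."""
    return (rng.standard_normal((n_users, n_aps)) +
            1j * rng.standard_normal((n_users, n_aps))) / np.sqrt(2)

H_probed = rayleigh_channel()        # channel as measured during the initial test
W = np.linalg.pinv(H_probed)         # zero-forcing precoder: H_probed @ W == I

symbols = np.array([1 + 1j, -1 + 1j, 1 - 1j])   # one QPSK symbol per user

# If the channel has not moved, each user receives exactly its own symbol:
print(np.round(H_probed @ (W @ symbols), 3))

# Rayleigh fading: the channel drifts between probing and transmission,
# and the stale precoder no longer cancels inter-user interference.
drift = 0.2
H_actual = np.sqrt(1 - drift**2) * H_probed + drift * rayleigh_channel()
print(np.round(H_actual @ (W @ symbols), 3))
```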

Edited by Zverilla


My 3 aces:

1. What Steve is proposing is in fact MIMO applied to all antennas "within range" of the mobile. We know how MIMO works: the more antennas, the more music. And how many antennas (base stations, APs, femtocells...) can a single UE (mobile, smartphone, laptop with a modem) register/"hear" simultaneously without special low-noise amplifiers? Not terribly many.

2. Since DIDO is in fact n x 1 MIMO where n tends to infinity (if we wanted to be cheeky, see point 1), I simply don't see how the capacity that grows n-fold in the AP-to-mobile direction (the downlink, as the country folk in my parts call it) can possibly grow in the opposite direction (uplink) when the mobile still has only 1 antenna.

3. Steve cleverly noted that "independent channels" are the crux of the problem for DIDO (and, frankly, MIMO) systems. How far apart do the DIDO antennas have to be from one another for us to really have "independent channels"? Answer: the de-correlation distance is such that it isn't a problem in macro systems, but it is a problem in micro/pico/femto systems. And the future is not in macro systems.
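A generic textbook sketch illustrates points 1 and 2 (this is the standard ergodic MIMO capacity formula under i.i.d. Rayleigh fading, equal power per transmit antenna, and no channel knowledge at the transmitter; nothing DIDO-specific): an n x n link scales roughly linearly with n, while an n x 1 link stays pinned near the single-antenna rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def ergodic_capacity(n_tx: int, n_rx: int, snr_linear: float, trials: int = 500) -> float:
    """Average capacity (bit/s/Hz) of an i.i.d. Rayleigh MIMO channel with equal
    power per transmit antenna and no channel knowledge at the transmitter."""
    caps = []
    for _ in range(trials):
        H = (rng.standard_normal((n_rx, n_tx)) +
             1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        A = np.eye(n_rx) + (snr_linear / n_tx) * (H @ H.conj().T)
        _, logdet = np.linalg.slogdet(A)
        caps.append(logdet / np.log(2))
    return float(np.mean(caps))

snr = 10 ** (20 / 10)   # 20 dB
for n in (1, 2, 4, 8):
    print(f"{n}x{n}: {ergodic_capacity(n, n, snr):5.1f} bit/s/Hz    "
          f"{n}x1: {ergodic_capacity(n, 1, snr):4.1f} bit/s/Hz")
# n x n grows roughly linearly with n (min(n_tx, n_rx) parallel streams);
# n x 1 stays near the single-antenna rate, which is the uplink asymmetry in point 2.
```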

Edited by ObiW
  • 3 months later...

This isn't exactly the latest news, but it is interesting: multisensory perception, how the senses influence one another. The philosophical implication is interesting (about how unreliable our own perceptions are), though I am even more interested in the practical implications, like how non-organoleptic sensory stimuli affect the perception of the taste (and smell) of food.

  • potato chips [are] crisper and better-tasting when a louder crunch is played back over headphones as [people] eat
  • a strawberry mousse tasted sweeter, more intense, and better when [eaten] off a white plate rather than a black plate.

We wine lovers have long known that fine wines should be drunk from the appropriate glassware, although it stands to reason that the chemical composition does not change even if the wine is drunk from a (well-washed) ajvar jar. Or, drinking champagne from a woman's shoe?

PS. I'm also attaching a paper which concluded that "the act of eating allows the different qualities of an object to be combined into a whole percept [...] flavor is not defined as a separate sensory modality but as a perceptual modality that is unified by the act of eating"

PS. I love this stuff.
  • 2 weeks later...
  • 4 weeks later...

I can't find a suitable topic, which surprises me; in the long history of Serbian forums, debates about evolution have always been the most heated and among the hot topics. Anyhow, one more nail in the coffin of creationism.

Biologists Replicate Key Evolutionary Step

But scientists in the University of Minnesota's College of Biological Sciences have replicated that key step in the laboratory using natural selection and common brewer's yeast, which are single-celled organisms. The yeast "evolved" into multicellular clusters that work together cooperatively, reproduce and adapt to their environment -- in essence, precursors to life on Earth as it is today.
Edited by Gonzo
  • 2 weeks later...
Breakthrough: The first sound recordings based on reading people's minds

Neuroscientists have developed a way to listen to words you've heard, by translating brain activity directly into sound. Their findings represent a major step towards understanding how our brains make sense of speech, and are paving the way for brain implants that could one day translate your inner thoughts into audible sentences.

Every language on Earth is made up of distinct acoustic features. The volume or rate at which syllables are uttered, for example, allow our minds to make sense out of speech. How the brain identifies these features and translates them into relevant information, however, remains poorly understood.

UC Berkeley researcher Brian Pasley and his colleagues wanted to see what features of human speech, if any, could be reconstructed by monitoring brain activity. Neuroscientists call this form of brain analysis — which is commonly construed as mind-reading — "decoding." If this study sounds familiar to you, it might be because last year, another team of scientists was able to decode images observed by volunteers by monitoring activity in the brain's primary visual cortex. What Pasley's team was trying to accomplish was quite similar, only they wanted to translate their volunteers' brain activity into auditory information. This, in turn, would require looking at a different region of the brain.

But that's not the only thing Pasley's team did differently. The scientists who last year reconstructed visual information used a popular brain-scanning method called functional magnetic resonance imaging (fMRI). And while fMRI is an incredibly useful way to monitor brain activation, it isn't actually the most direct method out there. Pasley's team wanted to get as close to their volunteer's brain waves as possible.

By seeking out patients already scheduled to undergo brain surgery, the researchers were able to place electrode nets directly onto the brains of 15 conscious volunteers over a region called the posterior superior temporal gyrus (pSTG), which is believed to play a crucial role in speech comprehension. The volunteers then listened to a series of pre-recorded words for five to ten minutes, while their brain activity was recorded via the electrode nets.

Pasley then created two computational models that could convert the electrode readings back into sound. This allowed the researchers to predict what word the volunteer had been listening to when the brain activity was recorded.

You can listen to some examples of the recordings here. The first word you'll hear is "waldo" — it's the version the volunteers heard as they were having their brain activity monitored. The next two sounds you'll hear are the versions of "waldo" that were reconstructed using the researchers' two different algorithms. This process is then repeated for the words "structure," "town," "doubt," "property," and "pencil." For each word, the true sound will play first, followed by the versions that have been reconstructed from brain activity. [The image up top features spectrograms that were created to compare the accuracy of the six reconstructions you've heard here to their original sounds]

Pasley and his team reconstructed 47 words in total. Using these reconstructions, he and his colleagues were able to correctly identify the words that their volunteers had listened to almost ninety percent of the time. Of course, the researchers also knew which words to listen for — but the fact that they could reconstruct something from brain waves at all is very impressive.

The ability to convert brain activity into usable information — be it audio or imagery — has a long, long way to go before we're reading one another's thoughts, but its potential applications have scientists racing to make it happen; and that's because these applications are as inspiring as they are unsettling.

In its present state, this technology cannot eavesdrop on an internal monologue playing out in your head; it can't be used to squeeze information out of an uncooperative murder witness; and it can't translate the thoughts of a stroke patient struggling to speak — but it could, and soon. How soon will likely depend on how similarly the brain handles the tasks of perceiving auditory information, and imagining it.

"Potentially, the technique could be used to develop an implantable prosthetic device to aid speaking, and for some patients that would be wonderful," said Robert Knight — a senior member of Pasley's team — in an interview with The Guardian. "The next step is to test whether we can decode a word when a person imagines it."

"That might sound spooky," Knight says, "but this could really help patients. Perhaps in 10 years it will be as common as grandmother getting a new hip."

The researchers' findings are published in the latest issue of PLoS Biology.
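One of the simplest ways to build a decoder of the kind described above is a regularized linear regression from electrode activity to the spectrogram of the heard sound. The sketch below uses synthetic data and ridge regression purely as an illustration; the feature extraction, time lags, and models in Pasley et al. are more involved and are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

# Synthetic stand-ins (illustrative assumption, not the actual pSTG recordings):
# a 32-bin "spectrogram" of the heard audio over T time points, and 64
# "electrode" channels that are noisy linear mixtures of those bins.
T, n_bins, n_elec = 2000, 32, 64
spectrogram = np.abs(rng.standard_normal((T, n_bins)))
neural = (spectrogram @ rng.standard_normal((n_bins, n_elec))
          + 0.5 * rng.standard_normal((T, n_elec)))

# Fit a linear decoder on the first 80% of the data, reconstruct the rest.
split = int(0.8 * T)
decoder = Ridge(alpha=1.0)
decoder.fit(neural[:split], spectrogram[:split])
reconstructed = decoder.predict(neural[split:])

# Score the reconstruction: mean correlation between true and predicted bins,
# roughly how such spectrogram reconstructions are typically evaluated.
corrs = [np.corrcoef(spectrogram[split:, b], reconstructed[:, b])[0, 1]
         for b in range(n_bins)]
print(f"mean per-bin correlation: {np.mean(corrs):.2f}")
```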

