Wednesday, 12 December 2007
And I do not mean monsters with tentacles (that's very normal), nor the fact that this life may be based on silicon not carbon (very unlikely, but still just chemistry).
The life would be `self-restrained'. And that would be really strange. Of course, the most probable resolution is that there is NO life on Mars - and then there is nothing strange about it.
Where does this prediction come from? Well, consider Earth. What is the probability that a spaceship landing on Earth would fail to discover life? Pretty much zero. Earth life has infiltrated every niche, from the abyssal depths of the oceans to volcanic pits. Even the Sahara desert is full of life.
Earth life even managed to invade meteorites coming down from space.
Given the signs of (at least at some time) life-friendly environments on Mars, if we assume that life there would be similar in principle to ours, it would be strange for the planet not to be just as inhabited as Earth. Of course, this might mean just micro(nano)organisms or some other forms, but why are they not everywhere? The `principle' I have in mind is replication, because this is the driving force behind the ultimate conquest of Earth by living organisms.
So, if there is life on Mars and if it has not conquered the whole planet, then it must be very strange: refraining from the `go forth and multiply' rule.
It has grown well out of proportion, because QM is not only really weird, it is also full of interesting human stories. Here I just signal a very, very interesting book for all of those who are interested in the origin of the quantum debates: the Solvay Conference of 1927.
Bacciagaluppi, G. & Valentini, A., Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, 2007. The book is available on the arXiv repository. Very interesting reading!
Friday, 23 November 2007
Yes, I know that the IPCC reports are difficult to read in their entirety. But they are at least worth leafing through. Estimates, analyses and predictions are much more sober. Go to the source, whenever you can. Journalists and commentators (including myself) are not reliable sources.
But there is one thing that I found missing in the IPCC report: a clear indication of the cause of the human-induced component of global warming. Yes, I do believe that there is such a component. I also believe that there are huge natural components, and that in the past these have been much stronger than what we predict for the next 100 years. During the most recent glaciation (only 25-20 thousand years ago) the sea level was 80 meters lower!
What I found missing is a very simple comparison between the factors used to measure global warming (such as the concentration of CO2) and ... human population. The correlation is clearly visible.
(greenhouse gases, 18 000BC to present day, from IPCC WEB site)
(human population, 10 000 BC to present day, WIKIPEDIA)
But what is my point? That out of political correctness the IPCC report does not make a simple statement (at least I did not find it): the human-induced global warming factors result directly from the explosive growth of the human population. Not from something that we do wrong in trying to live in the best conditions that we can - driving cars, using electricity, eating and heating our houses - but simply from being too numerous. The difference between the per capita emission of CO2 in the US and in India is a factor of 10. This is huge, I admit. But the growth of the population in the last 10 000 years was by a factor of 1000 - three orders of magnitude. Moreover, this overpopulation resulted in humans filling every continent, every niche, every environment, and turning them to our own purposes. I am deeply worried that by omitting this simple correlation, by concentrating on cars and on devices left on standby as ways to reduce our `carbon footprint', we are only fooling ourselves. It would only allow the population to grow more. There are simply too many humans.
I am reminded of the little speech of Agent Smith in the Matrix (should he get a Nobel Prize too?):
I'd like to share a revelation I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with their surrounding environment, but you humans do not. You move to another area, and you multiply, and you multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet.
Sunday, 11 November 2007
Tuesday, 6 November 2007
RECEIVING the Nobel prize does not necessarily stop great scientists making foolish statements. William Shockley won a Nobel for his work on transistors, but nevertheless managed to spend the latter years of his career making racist comments and even writing about the mental inferiority of black Africans.
Last week, James Watson, co-recipient of a Nobel prize for the discovery of the structure of DNA, made blatantly racist comments regarding the supposed mental inferiority of black Africans. The response has been swift. His comments were widely condemned and he was suspended from his post at Cold Spring Harbor Laboratory. Unlike Shockley, Watson later apologised for his remarks.
But what of the research in this area? Does the condemnation of Watson's words stem from solid science or from political correctness?
The goal of the comment is, I guess, not just another attack on Watson. It is to show that
The problems with our understanding of intelligence and race show that the criticism being levelled at Watson is based on science rather than political correctness.
Two arguments are used. The first is that race is a socially constructed concept, not a biological one; it derives from people's desire to classify. The second is our poor understanding of intelligence. Thus talking about racial differences in intelligence is doubly suspect, and thus unscientific.
But let me compare this with the statement of Watson and then apply a bit of logic (still, I guess, a part of science). The offensive statement of Watson, as far as I was able to track it down, was:
I am inherently gloomy about the prospect of Africa because all our social policies are based on the fact that their intelligence is the same as ours - whereas all the testing says not really.
Now, even Professor Sternberg admits in his comment that there are differences in the measured values of mental capabilities between various `groups':
The tests as they stand show some differences between various groups of children. The size of the differences and what groups do best in the tests depend on what is tested. For example, with various collaborators I have found that analytical tests of the kind traditionally used to measure so-called general abilities tend to favour Americans of European and Asian origin, while tests of creative and practical thinking show quite different patterns. On a test of oral storytelling, for example, Native Americans outperform other groups.
So there are differences. Whether the concept of race is well or poorly defined is beside the point - there is quite a lot to discuss on that subject. But absorbing science-based solutions to problems of great social importance requires exactly the kind of analytical capabilities whose differences even Sternberg admits might be real.
But let's assume, contrary to the cited findings, that there are no genetic differences in analytical intelligence between `groups of different origin'. That it is all due to upbringing and education. Certainly this component of the differences would be of great importance when we compare the US and African societies. But does it change the message of Watson even by an iota? Shouldn't we reconsider our social policies? After all, we do adjust the way we present things to different people on a daily basis, the best universally accepted example being the gradual way science is introduced in schools. And there are no cries of horror that we treat seven year olds as having different analytical capabilities than university students. No accusations of `childism'.
So, even if all the differences in intelligence shown by inhabitants of Africa are due to environment, is that not an even stronger reason to adjust our social policies? In most places, all the schooling their kids receive is practical thinking and oral storytelling: tribal tradition (including inter-tribal violence) and handling a Kalashnikov. This certainly does not help in understanding, for example, the complex issues of environmental protection and economic growth in harmony with Nature. Yet even in the countries where the situation is better and real schools are accessible, there are voices calling to get rid of `Eurocolonialist' science - for example, to replace mathematics with `ethnomathematics'. As far as I have been able to track it down, this new science has yet to produce any significant result apart from the fancy name. As a result of such mistreatment `the poor would get poorer'. And this is hardly the result we all desire.
In this context, Watson's call to adjust our policies to the reality of a different situation, whatever the reason for the difference, is hardly racist. I argue that the attacks stem from political correctness, not science. That race and intelligence are complex and multivalued notions does not preclude any knowledge about them. And logical reasoning is still a part of science. So let's stop acting on gut feelings and perhaps consider the issue logically.
Some time ago I found a clearly pseudoscientific work by Ilija Barukčić. The first link to it (through a Google search) was in a short note in the mail archive of the mailing list for the cygwin project. Far, far away from quantum mechanics. But the link led me to a WEB page of the supposedly peer-reviewed Causation: International Journal Of Science. The front page boasts exploding graphics with the title Bell's theorem ... refuted! in one-inch letters. Inside one finds two papers (claimed to be peer reviewed) by Ilija Barukčić: Bell's theorem. A fallacy of the excluded middle and Helicobacter pylori: the cause of human gastric cancer. Perhaps not surprisingly, the Editorial Board consists of - you probably guessed it - Ilija Barukčić!
With great surprise I opened the new issue (Nov 3rd) of New Scientist. In only slightly smaller letters the cover declared `NOT SO SPOOKY. Was Einstein right about quantum theory?'
And the article pointed to a disproof of Bell's theorem by Joy Christian. The disproof is, according to the New Scientist, based on using for the observed values not `normal numbers' but elements of a Clifford algebra. Well, I have not yet read the original papers (which might be found here and here and here).
When I do, I'll try to grok some sense out of the whole matter.
he has managed to publish, in the psychology journal Charaktery, an article on morphic resonance. Most of the `facts' in the article were completely false. Not only did the journal's Editors not check the data, but they actively `helped' to write the article, proposing to add to it pirated excerpts from an old review by Rupert Sheldrake. This goes beyond the stupidity of the Lingua Franca editors, who could not tell science from pseudoscience: here a journal boasting more than twelve professors and PhDs on its board actively worked to make the hoax `better'.
Unfortunately, the WEB page devoted to the issue is in Polish, but maybe some readers would want to have a look.
Saturday, 27 October 2007
I am inherently gloomy about the prospect of Africa because all our social policies are based on the fact that their intelligence is the same as ours - whereas all the testing says not really.
First, Watson's talks in the UK were cancelled; then his institution, Cold Spring Harbor Laboratory, forced him to resign (here are the CSHL and Watson's official announcements).
But I urge you to look at the statements themselves, even as they were taken out of context (and I am pretty sure that whoever picked them out did pick the worst parts of the whole viewpoint), rather than at the conditionals and caveats. So what remains: a statement that some people (as measurements say) might be less intelligent than others, and that our policies should reflect this?
But we do this every day. What and how we teach children in schools has to be based on the assumption of differences in the capability to absorb and use information. Trying to teach first graders straight university-level science would result in ... catastrophe. But suggesting that the problem might be more general is racism.
Of course, the fact that 95% of NBA players are black has nothing to do with racism. Racism works only one way.
The second piece of news was the massacre of gorillas by some African militia. Apparently, for shooting practice, they killed some tens of the great apes. For fun. In a recent report I read that 1/3 of the primate population is in danger, directly because of human activity. Now, I am going to be racist again: think about who is organizing the preservation areas and trying to save the apes, and who is using them as targets (good ones: they move, but they won't shoot back!).
The third piece of news is a statement by a UN expert, Jean Ziegler (the UN Special Rapporteur on the right to food), who has stated that
It is a crime against humanity to convert agricultural productive soil into soil which produces food stuff that will be burned into biofuel.
Ziegler claims that all causes of hunger are man-made, it’s a problem of access, not overpopulation or underproduction, and can be changed by human decision.
He noted that from 1972 to 2002, the number of gravely undernourished people in Africa increased from 81 million to 202 million, and every day hundreds of Africans “take to the sea” fleeing from hunger.
He called on the UN Human Rights Council “to declare a new human right” to protect those who flee from hunger.
The right to food is defined as the right to have regular, permanent and unrestricted access, either directly or by means of financial purchases, to quantitatively and qualitatively adequate and sufficient food corresponding to the cultural traditions of the people to which the consumer belongs, and which ensures a physical and mental, individual and collective, fulfilling and dignified life free of fear.
When we look at the statement above we can hardly disagree. Or can we? Is the right applicable regardless of the size of the human population? How can we be sure of adequate and sufficient food corresponding to the cultural traditions of the people, when the density of these people has increased more than 30 times since the beginning of the 20th century? Should we not look at the possibility that a change in the culture would be advisable? That such "laws" should have a physical possibility of being implemented?
Ziegler calls for a five year ban on production of biofuels. Perhaps a five year `restraint' on human population growth in some places would be more sensible? To take care of the people who live there today and to take care of the environment for their future descendants?
Sunday, 21 October 2007
resurrection of human consciousness. Is there a place for physics, neuroscience and computers?' by Vadim Astakhov.
The paper is an incredible mixture of two languages. On one side we find:
resurrection, mind uploading, time tunneling and teleportation.
On the other we find a lot of equations (I have strong doubts as to their applicability) and an even longer list of physical concepts deemed applicable to the topic:
Riemannian metric, Ricci tensor, Euler-Lagrange equations, Lia-algebra (sic!), generator of infinitesimal transformation, Renormalization group, Holographic representation, etc. etc.
as well as relatively new notions:
Stoichiometric matrix, auto poetic functionality, geometric networks, information geometry, causality circuit.
Perhaps I am wrong, but my first attempt to read the paper led me to the conclusion that it is complete mumbo-jumbo. Perhaps some Reader of this blog could explain why and how the metric for the network system should be Riemannian (Section 2), so that the theory "resembles" General Relativity. And how this is related to Section 7, where the system is described in a quantum way, strangely resembling the one used for the Bell theorem. Or why `total Fidelity-information is conserved. This is something like Energy Conservation Law for information systems'.
I'd like to apply this law, because I am constantly forgetting things.
However, the misuses I see in this particular paper are a secondary matter. This is a free world, especially when it comes to WEB publications. This blog is a perfect proof of that freedom.
But the arXiv publication carries a note that the work has been submitted to the conference Toward a Science of Consciousness 2008. I looked up the conference site, and, as the organizers (the University of Arizona) claim, it is to be a place for `intense, far-ranging and rigorous discussions on all approaches to the fundamental issue of how the brain produces conscious experience'.
I shall try to keep an eye on the list of accepted papers. And if Astakhov makes it, then I would have to redefine my notion of the word rigorous. Or of the word science?
Monday, 15 October 2007
Although he openly admits that he is not the first to propose these ideas, the way they are presented is quite interesting. The proposal is based on two hypotheses. The first of them is close to my heart, while the other is, hmmm, courageous. What are the two hypotheses?
External Reality Hypothesis (ERH):
There exists an external physical reality completely independent of us humans.
Mathematical Universe Hypothesis (MUH):
Our external physical reality is a mathematical structure.
ERH seems to be quite widely accepted, although in the light of Quantum Mechanics results we must remember that the external physical reality might be very different from our intuitions. It might be nonlocal, unreal, quantum - pick the name you prefer. The key point, which I hold dear, is independent of us humans.
Now, the aim of the papers is to argue for the necessity of MUH. There is a great reason for this: it would solve the mystery of why the Universe is, in so many of its aspects, so well describable by mathematics.
It could also change our perspective on the Theory of Everything and science:
If the mathematical universe hypothesis is true, then it is great news for science, allowing the possibility that an elegant unification of physics and mathematics will one day allow us to understand reality more deeply than most dreamed possible. Indeed, I think the mathematical cosmos with its multiverse is the best theory of everything that we could hope for, because it would mean that no aspect of reality is off-limits from our scientific quest to uncover regularities and make quantitative predictions.
However, it is even more difficult to break the bounds of our limited imagination and intuitions and perceive our universe as some mathematical structure, which is by definition an abstract, immutable entity existing outside of space and time. What is a `mathematical structure' anyway? A set of abstract objects and the rules that connect them? How complex would that structure have to be to describe the seemingly infinite variety of our observations, the probable complexity of the Universe? Most people, even physicists and mathematicians, would not venture into this abstract space at all.
But - why not? Our human perspective is rather limited and inadequate. Even for `almost normal' phenomena. It suggests that Sun circles the Earth when we observe it moving across the sky. It does not help us in understanding how a light bulb works, or a hard disk in our computer. Why should our intuitions be more usable when we talk about the question of ultimate reality? Tegmark even uses our inadequacy as a `proof' of the Mathematical Universe hypothesis (although I detect some measure of tongue-in-cheek there):
Ultimately, why should we believe the mathematical universe hypothesis? Perhaps the most compelling objection is that it feels counter-intuitive and disturbing. I personally dismiss this as a failure to appreciate Darwinian evolution. Evolution endowed us with intuition only for those aspects of physics that had survival value for our distant ancestors, such as the parabolic trajectories of flying rocks. Darwin’s theory thus makes the testable prediction that whenever we look beyond the human scale, our evolved intuition should break down.
We have repeatedly tested this prediction, and the results overwhelmingly support it: our intuition breaks down at high speeds, where time slows down; on small scales, where particles can be in two places at once; and at high temperatures, where colliding particles change identity. To me, an electron colliding with a positron and turning into a Z-boson feels about as intuitive as two colliding cars turning into a cruise ship. The point is that if we dismiss seemingly weird theories out of hand, we risk dismissing the correct theory of everything, whatever it may turn out to be.
It is like saying `it must be true because I do not understand it'. Well, perhaps it is worthwhile to remember our limitations, and to make sure that we do not dismiss some possible solutions because of them.
But the papers are certainly worth reading.
Sunday, 30 September 2007
Well neither did I, until today. Neither does Wikipedia.
Who is he?
Well, today I found two arXiv papers by Jeroen van Dongen:
Emil Rupp, Albert Einstein and the canal ray experiments on wave-particle duality: Scientific fraud and theoretical bias
The interpretation of the Einstein-Rupp experiments and their influence on the history of quantum mechanics
It seems that
In 1926 Emil Rupp published a number of papers on the interference properties of light emitted by canal ray sources. These articles, particularly one paper that came into being in close collaboration with Albert Einstein, drew quite some attention as they probed the wave versus particle nature of light. They also significantly propelled Rupp’s career, even though that from the outset they were highly controversial.
In 1935 Rupp very publicly retracted no less than five of his scientific publications from the previous year. The articles dealt with such subjects as the polarization of electrons and the artificial production of positrons. Rupp published his retraction in a short notice that appeared in the Zeitschrift für Physik. He stated that his withdrawals were the result of an illness and supplied a medical opinion - by a “Dr. E. Freiherr von Gebsattel” - in support of his claim:
"Dr. Rupp had been ill since 1932 with an emotional weakness (psychasthenia) linked
to psychogenic semiconsciousness. During this illness, and under its influence, he has, without being himself conscious of it, published papers on physical phenomena [...] that have the character of ‘fictions.’ It is a matter of the intrusion of dreamlike states into the area of his scientific activity".
Well, this seems to be a very interesting part of physics history, one curiously absent from most accounts. For example, Abraham Pais's biography of Einstein, Subtle is the Lord, does not mention Rupp at all. The only other publication mentioning Rupp that I was able to find at short notice was A. P. French, The Strange Case of Emil Rupp (unfortunately not freely available). The abstract of French's paper reads:
Physics has seldom had to deal with claims of alteration or fabrication of data, such as have troubled biological and medical research in recent years. The case of Emil Rupp in the 1930s was, however, a notable exception. The present paper revisits this case, adding in certain areas to earlier accounts of Rupp and his work. The case is not without significance, because Rupp's publications carried considerable weight during a historically important era of 20th-century physics.
My usual source, Google Scholar, lists 8 papers by Emil Rupp (half of them being US patents).
I wonder why this case is so hushed up. If, indeed, this is a case of fraud in physics, is it a sign of shame that physicists - within the most concrete of natural sciences - let the fraud through?
Perhaps you can supply me with more links?
Monday, 17 September 2007
I have not formed my opinion yet. But perhaps there is someone among the readers who has interesting ideas, suggestions of `must read' papers etc?
Sunday, 9 September 2007
I have read it and I have to express my gratitude for its availability. As before, the clarity, wit and understanding of human nature are great. Buy the book if you can afford it. Read it anyway.
Friday, 31 August 2007
For those of you from abroad who (rightly so) are not really interested in the Polish power struggle, I owe a brief exposition: the Polish government, led for about a month now by the minority Law and Justice party, is making crazier and crazier decisions. Should you want to look at the situation with an impartial eye, here is the Reuters news bite Polish government critic detained, opposition outraged.
The opposition and human rights groups say the government's anti-corruption drive has turned into a witch-hunt in which anyone who does not share the ruling party's views is branded a criminal or a traitor to Polish interests.
But this blog is not political (although I do have very definite political opinions). So why do I mention the topic? Because of the statement issued by the Prime Minister to Gazeta Pomorska, in which he reiterates his opinions about the existence of `the system' (układ) - a setup of criminal activities linking everybody to everybody (excluding only his closest collaborators). That is an old story. But the interview also confirms that the government is conducting a scientific investigation, using a computer model, that shows that the układ is real!
Wow! And I called for developing the model only two days ago! There must be really powerful people reading my blog.
Wednesday, 29 August 2007
Since that time I have received a few papers on closely related subjects to review, which I have done to the best of my ability. I no longer have the time to continue my own studies (they were done during a forced sabbatical between two jobs, caused by non-competition clauses in my contract). But looking through the literature on the subject, which seems to grow exponentially (the subject is relatively easy, it is also quite easy to obtain funding - from either physics or social sciences departments - and it seems to be one of the fashionable ones), I do see a less trodden path.
Perhaps someone would like to take it?
The idea is based on the reversal of the preferential attachment (`the rich get richer') phenomenon that seems to be found in many networks (from the physical Internet infrastructure, through the WEB, to scientific collaborations and actors' relationships). In the processes that govern the formation of opinions in modern societies, such networks, with their influential hubs, play an important role. While many early models were based on 1D or 2D close-proximity opinion exchanges, in today's small world we interact via networks.
But one phenomenon quite crucial for models of information spread and opinion formation in such networks is that people quite readily cut their links with those who are `not to their liking'. Thus, instead of being converted to majority opinions, enclaves of like-minded people form, cut off from the main network. We all see it in real life, especially in the dramatic circumstances of terrorist groups and their supporters.
Computer modelling could help establish the conditions that would diminish this tendency to cut off links with those who do not share our opinions. Some incentives for keeping the links open (for example, through promoting participation in `open' activities and societies, where ideas are exchanged) might be included in the models and studied.
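Just to make the idea concrete, here is a minimal toy of my own devising (a sketch under my own assumptions, with no claim of matching any published model): agents with binary opinions sit on a random graph; whenever two linked agents disagree, either the link is cut or one agent is converted. Counting the connected components afterwards shows whether enclaves have split off.

```python
import random

def simulate(n=100, k=6, p_cut=0.5, steps=20000, seed=1):
    """Toy voter-style model where disagreeing agents may cut
    their link (probability p_cut) instead of arguing it out."""
    rng = random.Random(seed)
    # build a random graph: each node linked to at least k random others
    adj = {i: set() for i in range(n)}
    for i in range(n):
        while len(adj[i]) < k:
            j = rng.randrange(n)
            if j != i:
                adj[i].add(j)
                adj[j].add(i)
    opinion = {i: rng.choice([0, 1]) for i in range(n)}
    for _ in range(steps):
        i = rng.randrange(n)
        if not adj[i]:
            continue                      # isolated agent, nothing to do
        j = rng.choice(tuple(adj[i]))
        if opinion[i] != opinion[j]:
            if rng.random() < p_cut:
                adj[i].discard(j)         # cut the offending link
                adj[j].discard(i)
            else:
                opinion[i] = opinion[j]   # get converted instead
    return adj, opinion

def components(adj):
    """Number of connected components in the final graph."""
    seen, comps = set(), 0
    for start in adj:
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v] - seen)
    return comps

adj, opinion = simulate()
print(components(adj))
```

Raising `p_cut` (less incentive to keep links open) should, in such a toy, push the network towards fragmentation into opinion-uniform islands; lowering it favours consensus on a still-connected graph. All parameter names here are mine, chosen only for illustration.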
One of my friends has remarked that such computer models are in themselves worthless, as they are purely artificial. Without reference to real-life data (experimental or observational) the models are just toys. I agree. But the phenomenon of cutting links is easy enough to study in reality. One can imagine, for example, monitoring the evolution of links between students on a university campus as they go through their period of study and later disperse to their jobs. Smaller groups could be used to monitor the susceptibility of connections to differences in opinions. So it does not have to be a purely artificial topic.
Question: does anyone know of works already done along these lines?
Monday, 27 August 2007
The experiment is designed to observe, with as little disturbance as possible, the quantum state of a cavity containing an initially unknown number of photons. The authors describe it as follows:
The irreversible evolution of a microscopic system under measurement is a central feature of quantum theory. From an initial state generally exhibiting quantum uncertainty in the measured observable, the system is projected into a state in which this observable becomes precisely known. Its value is random, with a probability determined by the initial system's state. The evolution induced by measurement (known as 'state collapse') can be progressive, accumulating the effects of elementary state changes. Here we report the observation of such a step-by-step collapse by measuring non-destructively the photon number of a field stored in a cavity. Atoms behaving as microscopic clocks cross the cavity successively. By measuring the light-induced alterations of the clock rate, information is progressively extracted, until the initially uncertain photon number converges to an integer. The suppression of the photon number spread is demonstrated by correlations between repeated measurements. The procedure illustrates all the postulates of quantum measurement (state collapse, statistical results and repeatability) and should facilitate studies of non-classical fields trapped in cavities.
I must admit that my understanding of QM gets more and more inadequate with every such report. For example, I do not understand how the `non-destructive' measurements proposed by the authors really influence the state of the photons. Perhaps they do not change their number.
In this experiment, light is an object of investigation repeatedly interrogated by atoms. Its evolution under continuous non-destructive monitoring is directly accessible to measurement, making real the stochastic trajectories of quantum field Monte Carlo simulations
What is observed is the collapse of the state into one with a definite number of photons. The observed number then remains constant - until the cavity absorbs one of the photons (on a much longer timescale), after which the measurements show the smaller number.
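The step-by-step narrowing can be caricatured with purely classical Bayesian updating (a toy sketch of my own, not the experiment's actual quantum dynamics): suppose each atom crossing the cavity yields a binary outcome whose probability depends on the photon number, and keep updating an initially flat distribution over photon numbers.

```python
import math
import random

def qnd_collapse(true_n=3, n_max=7, atoms=400, phi=0.5, seed=0):
    """Classical caricature of progressive QND photon counting:
    repeated binary atomic outcomes narrow a flat prior over the
    photon number down to (essentially) a single integer."""
    rng = random.Random(seed)
    p = [1.0 / (n_max + 1)] * (n_max + 1)   # flat prior over 0..n_max photons

    def prob_click(n):
        # assumed detection probability for an atom given n photons;
        # the cos^2 form and the value of phi are illustrative only
        return math.cos(n * phi / 2) ** 2

    for _ in range(atoms):
        click = rng.random() < prob_click(true_n)   # simulated outcome
        like = [prob_click(n) if click else 1.0 - prob_click(n)
                for n in range(n_max + 1)]
        p = [pi * li for pi, li in zip(p, like)]    # Bayesian update
        s = sum(p)
        p = [pi / s for pi in p]                    # renormalize

    return p

posterior = qnd_collapse()
print(max(range(len(posterior)), key=posterior.__getitem__))
```

Of course this toy captures only the information-acquisition side of the measurement; the genuinely quantum content of the experiment - state collapse and back-action on the field - is precisely what it leaves out.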
When I find the words `repeated interrogation' my mind jumps to `continuous measurement' and thus to the Quantum Zeno Effect. Is there any connection?
And one more remark: the short article on the discovery on physicsworld.com has attracted a few comments. By far the most extensive is one by Andrei P. Kirilyuk, who is a champion of "Universal Concept of Complexity by the Dynamic Redundance Paradigm: Causal Randomness, Complete Wave Mechanics, and the Ultimate Unification of Knowledge"
OK, OK - I admit I do not understand him either. But there are some tell-tale signs of going beyond normal science, such as Kirilyuk's remark that ALL famous science creators, from Descartes and Newton to Einstein and de Broglie, were notorious mavericks understood by almost nobody at the time of their discoveries - which supposedly builds up his credentials. This reminds me of a famous quote:
They laughed at Copernicus.
They laughed at the Wright Brothers.
Yes, well, they also laughed at the Marx Brothers.
Being laughed at does not mean you are right.
Another signal is that Citebase lists 18 articles quoting the Kirilyuk work ... all of them by ... Kirilyuk himself.
Concluding: it is difficult to follow the new developments in physics. A lot depends on peer review. But - should we have some sort of mechanism for strange approaches that fall outside mainstream science? Would the famous EPR paper be published today? Especially if the author list did not include Einstein?
Monday, 20 August 2007
Consider the recent paper by P. M. Gleiser, How to become a superhero.
It is devoted to the analysis of a
collaboration network based on the Marvel Universe comic books. First, we consider the system as a binary network, where two characters are connected if they appear in the same publication. The analysis of degree correlations reveals that, in contrast to most real social networks, the Marvel Universe presents a disassortative mixing on the degree. Then, we use a weight measure to study the system as a weighted network. This allows us to find and characterize well defined communities. Through the analysis of the community structure and the clustering as a function of the degree we show that the network presents a hierarchical structure. Finally, we comment on possible mechanisms responsible for the particular motifs observed.
Hmm. Looking at the results we find all the typical traits of complex networks: power law distributions, hubs, giant components. Funny. Even funnier is the acknowledgements part of the paper:
This work has been supported by grants from CONICET PIP05-5114 (Argentina), ANPCyT PICT03-13893 (Argentina) and ICTP NET-61 (Italy).
It makes me wish to come back to physics. At least in Argentina or Italy...
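Incidentally, the `disassortative mixing on the degree' mentioned in the abstract is just the Pearson correlation of the degrees at the two ends of each edge. A minimal sketch, with a hand-invented cast list rather than the real Marvel data:

```python
import itertools, math

# Degree assortativity of a tiny toy "appearances" network.
# Characters are linked if they share a publication; the cast
# lists below are invented purely for illustration.

books = [
    {"Hero1", "Hero2", "Sidekick"},
    {"Hero1", "Hero3"},
    {"Hero1", "Villain"},
    {"Hero2", "Villain"},
]

# project co-appearances onto a simple undirected edge set
edges = set()
for cast in books:
    for a, b in itertools.combinations(sorted(cast), 2):
        edges.add((a, b))

deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1

# assortativity = Pearson correlation of (deg[a], deg[b]) over edges,
# counting each edge in both directions
xs, ys = [], []
for a, b in edges:
    xs += [deg[a], deg[b]]
    ys += [deg[b], deg[a]]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
r = cov / (sx * sy)
print(round(r, 3))  # negative here: the hub links mostly to low-degree nodes
```

A negative coefficient means hubs tend to connect to low-degree characters, which is what the paper reports for the Marvel network (and what distinguishes it from most real social networks).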
Sunday, 19 August 2007
Wow! There is thus ONE person in the world who has solved the phenomenon! I went through the paper (in Polish, unfortunately) and it is as far from a solution as possible.
You might say - just another crank, just another esoteric WEB site and publication. But it is not so: Fizyka i Przyroda is sponsored by the Polish Academy of Sciences Institute of Physics and Institute of Nuclear Studies. It promotes physics competitions. The publications include some very good - though elementary - texts, such as a study of the radioactive properties of granite by a high school student. Hardly surprising - but a solid piece of introductory experimental work.
And I wonder how the two Institutes would feel about being associated with absurd divination studies.
Thursday, 16 August 2007
The Annals of Mathematics, 2nd Ser., Vol. 135, No. 3 (May, 1992), pp. 411-468.
Unfortunately, I know this article only by reflected light: I have not been able to find a freely accessible version. But the ripples and comments it has generated are quite numerous; see, for example, Noncollision Singularities: Do Four Bodies Suffice? by Joseph L. Gerver, or Ejections and Captures by Solar Systems.
What is the point? Simply that it is possible to cleverly construct a classical (Newtonian) system of five bodies in which one of them is expelled to infinity in finite time.
Just pure fun? Not really - it turns out that such a result has implications for physical computability, the Church-Turing hypothesis, and the whole issue of determinism in classical physics. If a body can be expelled to infinity, then by reversing time and the initial conditions a body can arrive from infinity and become part of the system - in finite time! And this means that there might be an essentially unknown and unknowable influence (unknowable because it is infinitely removed) acting on the system - again, not in the infinitely distant future but within finite time.
The paper is quite old (15 years) but compared to the age of Newtonian theory it simply shows that an old dog still has a lot of tricks to learn.
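The flavour of such finite-time escapes can be conveyed by a toy calculation (emphatically not Xia's actual construction): if successive legs of a trajectory double in length while the speed on each leg quadruples, the total distance diverges while the total travel time converges to a finite limit:

```python
from fractions import Fraction

# Toy illustration only: a trajectory whose n-th leg has length 2^n
# covered at speed 4^n. The distance diverges, but each leg takes
# time (1/2)^n, so the total travel time is a convergent geometric
# series - the body is "at infinity" after a finite time.
distance = Fraction(0)
time = Fraction(0)
for n in range(60):
    leg = Fraction(2) ** n       # length of the n-th leg
    speed = Fraction(4) ** n     # speed on that leg
    distance += leg
    time += leg / speed          # time for the leg = (1/2)^n

print(float(time))       # approaches 2
print(distance > 10**15)  # True: the distance grows without bound
```

The hard part of Xia's theorem is, of course, showing that Newtonian gravity among five bodies can actually pump energy fast enough to produce such divergent speeds; the toy only shows why `infinity in finite time' is not arithmetically absurd.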
Wednesday, 8 August 2007
Authors: G. Nimtz, A. A. Stahlhofen.
At first I was surprised by the bold statements of the authors:
We demonstrate the quantum mechanical behavior of evanescent modes with digital microwave signals at a macroscopic scale of the order of a meter and show that evanescent modes are well described by virtual photons as predicted by former QED calculations.
Several QED and QM calculations predicted that both evanescent modes and tunneling particles appear to propagate in zero time.
All three properties - the violation of the Einstein energy relation, the zero time spreading, and the non observability of evanescent modes - can be explained by identifying evanescent modes with virtual photons as predicted by several authors, see for instance references. Tunneling and evanescent modes are properly described by quantum mechanics
The (unpublished) paper's form is `unacademic' (i.e. not TeX-ed...) and at first I had doubts as to the credentials of the experiment. Was it another out-of-nowhere Einstein basher? But a further search showed that this is one paper in a long series of publications involving light and microwave signal processing in unusual setups. For references it suffices to search arXiv for au:Nimtz_G
Signaling faster than light speed? Not only possible but observed?
In the light of a recent discussion with a friend (also on this blog...): how would one describe it in a quantum way? Or in a photons-as-billiard-balls way?
Saturday, 4 August 2007
It provides a very clear explanation of the problems that a deeply religious society and a religious government pose for science education. According to Mansouri, `there is little research activity in most areas of physics, and indeed science as a whole, in Iran'. There are maybe 500 PhD-educated physicists, and we can only guess how many of them work on the state nuclear programme. Who really believes Iran needs an atomic reactor for peaceful energy purposes? The problem, says Mansouri, is
that Iran, like other Muslim countries, has a very distorted view of what science is – a problem that is rooted in culture and reflected in language. He points out that the Arabic term elm (which is used in almost all Muslim countries) is often taken to mean `science', but this word in fact refers to a deep knowledge of Islam. Indeed ahl e elm means `religious scholar'. Consequently, there is no clear distinction between the meaning and purpose of science and the meaning and purpose of theology.
Iranian universities today do teach science beyond that required for practicing Islam, but Mansouri believes that the legacy of this narrow mindset means that students still learn a very prescribed curriculum by rote, rather than being encouraged to investigate subjects for themselves. [...]
This view of science as a fixed body of knowledge then shapes the way politicians think of science and therefore how they fund it, he says. They view a scientist as an ahl e elm sitting in a small study who will at most need money for new books rather than the far greater resources needed for experiments, lab technicians and computers. The result is that Iran spends only about 0.5% of its gross domestic product on R&D.
It is quite interesting to note that in Poland R&D spending is about 0.59%, very close to Iran's. Is this a sign of some deeper similarity? (Compare the USA with 2.76%, Japan with 3.12% or the European Union average of 1.93%; data for 2002.)
Monday, 30 July 2007
While the text is, in its political fervour, almost crazy, the ideological blindness that makes it funny lies in small details: the predictions and accusations have almost all been falsified by the passage of time. Two sinful monopolies were mentioned: ATT (soon broken up by the very `capitalist government') and IBM (now far from a monopoly). An example of unequal access to science was provided by computers - far removed from ordinary people (just a couple of years before the PC was invented). Lasers and other advances in telecommunications were supposed to benefit only the telco company. Who could have envisioned ubiquitous mobile phones...
One thing is certain, a lesson for today: when we are blinded by our own ideas and goals, it is easy to miss the reality of the world. When we think about environmental protection, nuclear energy or the empowerment of the people, it is worth spending a minute analyzing the fate of the predictions from 1972. Read it, whether you are a conservative or a leftist.
PS. On the current Science for the People WEB site many of the problems have changed. Obviously. Some were solved, some never appeared. But the language has remained truly revolutionary. A point for Dawkins and his meme concept.
Sunday, 22 July 2007
A fine example is provided by
Jacques, V.; Wu, E.; Grosshans, F.; Treussart, F.; Grangier, P.; Aspect, A. & Roch, J. Experimental realization of Wheeler's delayed-choice gedanken experiment. Science, 2007, 315, 966-968
Let me just quote here the conclusions of the paper:
Our realization of Wheeler’s delayed choice Gedanken Experiment demonstrates beyond any doubt that the behavior of the photon in the interferometer depends on the choice of the observable which is measured, even when that choice is made at a position and a time such that it is separated from the entrance of the photon in the interferometer by a space-like interval. In Wheeler’s words, since no signal traveling at a velocity less than that of light can connect these two events, “we have a strange inversion of the normal order of time. We, now, by moving the mirror in or out have an unavoidable effect on what we have a right to say about the already past history of that photon”. Once more, we find that Nature behaves in agreement with the predictions of Quantum Mechanics even in surprising situations where a tension with Relativity seems to appear.
I wonder if this result will hold up under repetition. If so, it would reaffirm that we still have a lot to understand. Especially about time in Quantum Mechanics.
Friday, 20 July 2007
But in general this cycle: `DNA -> RNA -> proteins (complete organism) -> survival and reproduction' is seen as a basic and universally true rule.
The discovery of prions (and the subsequent fame of this discovery, related to `mad cow' disease and its connection to Creutzfeldt–Jakob disease in humans) has shown that there might be very interesting exceptions. For me it is exactly this sort of exception that makes it necessary to take a deeper look at the rule itself.
I have found a short, but very readable article summarizing this subject,
by A. E. Bussard, A scientific revolution?
The prion anomaly may challenge the central dogma of molecular biology, European Molecular Biology Organization Reports, 2005, 6, 691-694
The main line of reasoning is not the mere existence of prions, but the fact that in some cases relevant information may be stored and transmitted between generations not via DNA or RNA, but via proteins:
Prions have recently been discovered to act as genetic elements that store and transmit information in various organisms: mainly yeast, the fungus Podospora and the sea hare Aplysia.
How is it possible? How does it work? The first evidence came from studying yeast, where evidence of non-Mendelian transmission of phenotypic traits was found in the late 1960s. As Bussard describes the situation:
Much of this evidence relies on Lindquist’s work on yeast prions. Not only did she show that prion domains in some proteins act as molecular switches that activate or deactivate the protein, she also showed that prions are non-mendelian genetic elements that have an important evolutionary role by producing new phenotypes, which are often beneficial. Her work on sup35 revealed that the protein switches to its prion state [PSI+] when the environmental conditions for yeast deteriorate, which decreases translation fidelity and causes the ribosome to read beyond nonsense codons. This in turn enables the expression of formerly silent genes and gene variants, and creates new phenotypes. [PSI+] is passed on to daughter cells in which it self-replicates by imposing its conformation on normal sup35 proteins, until a new phenotype eventually emerges that is better adapted to the new environment. In another elegant experiment, Li and Lindquist showed the generality of this mechanism for controlling protein activity by fusing a yeast prion domain to a rat protein.
For more detailed information see, for example,
True, H. L. & Lindquist, S. L. A yeast prion provides a mechanism for genetic variation and phenotypic diversity. Nature, 2000, 407, 477-483
Lindquist, S.; Krobitsch, S.; Li, L. & Sondheimer, N. Investigating protein conformation-based inheritance and disease in yeast. Philos Trans R Soc Lond B Biol Sci, 2001, 356, 169-176
When we start to think how complex the molecular biology of life really is, when we take into account the myriad of interactions and influences, then we may begin to believe in the wonderful nature of life --- and, at least for me, in the wonderful nature of the study of life, of discovering the links and relationships between its various elements.
There is a very significant message that Bussard emphasizes:
Biologists need to get used to the idea that there is no end in sight when it comes to new insights and scientific breakthroughs; this idea has long been abandoned by physicists who are subject to regular scientific revolutions. I wonder if knowledge is, like the Universe, basically endless and in constant expansion, just as the complexity of life itself is also expanding infinitely.
And this is exactly what I believe myself.
Saturday, 14 July 2007
Yes, I am impressed with the explanatory power of this model, especially with respect to the increasing speed of expansion, but the question is: how did it come about that the scientific community accepted the Dark Energy explanation (and Dark Matter as well) so quickly, without any inkling as to what this Dark Energy is?
More - with a 120-orders-of-magnitude difference between our best theoretical estimate and the observed value! Yet many astrophysicists behave as if the problem did not exist.
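The mismatch is easy to reproduce on the back of an envelope. A sketch under round-number assumptions (H0 = 70 km/s/Mpc, Omega_Lambda = 0.7, and the Planck density taken as the naive vacuum-energy estimate):

```python
import math

# Back-of-envelope check of the famous "120 orders of magnitude"
# discrepancy. Constants are standard; H0 and Omega_Lambda are
# round values assumed for illustration.
hbar = 1.054571e-34   # J s
G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
Mpc = 3.0857e22       # m
H0 = 70e3 / Mpc       # Hubble constant in s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_lambda = 0.7 * rho_crit                # observed dark-energy density
rho_planck = c**5 / (hbar * G**2)          # naive Planck-scale estimate

print(math.log10(rho_planck / rho_lambda))  # about 123 with these conventions
```

Depending on the conventions chosen, one gets roughly 120 orders of magnitude between the naive estimate and the observed value - the worst prediction in the history of physics, as it is often called.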
When ApPEC and ASPERA, consortia of national agencies that fund astroparticle physics research in Europe, published the report on Status and Perspective of Astroparticle Physics in Europe, the collaborators identified six `basic questions that need to be addressed by the astroparticle community over the next decade':
1. What is the Universe made of?
2. Do protons have a finite life time?
3. What are the properties of neutrinos? What is their role in cosmic evolution?
4. What do neutrinos tell us about the interior of the Sun and the Earth, and about Supernova explosions?
5. What is the origin of cosmic rays? What is the view of the sky at extreme energies?
6. Can we detect gravitational waves? What will they tell us about violent cosmic processes and about the nature of gravity?
As for question 1, the particular focus is on Dark Matter, which is described as
`Dark Matter turns out to be the majority component of cosmic matter. It holds the Universe together through the gravitational force but neither emits nor absorbs light. Dark Matter (including a small admixture of massive neutrinos) has likely played a central role in the formation of large scale structures in the Universe. Its exact nature has yet to be determined. The discovery of new types of particles which may comprise the dark matter would confirm a key element of the Universe as we understand it today.'
But for the Dark Energy, the report states only that `The nature of dark energy remains a mystery, probably intimately connected with the fundamental question of the cosmological constant problem.'
The plans are - one should notice - made by astroparticle rather than astrophysics organizations. So, perhaps it is not too surprising that in the list of planned and supported experiments the authors state:
`It is this part of the search for Dark Matter that we assign to the field of astroparticle physics. Dark Energy has a similar density to dark matter; unveiling of its nature would have profound impact on astroparticle physics. On the other hand, current projects exclusively rely on tools of astronomy; therefore we express strong support for dark energy projects but leave detailed recommendations to the strategic planning of astronomy roadmaps.'
Despite these remarks (perhaps written slightly tongue-in-cheek), the report is very clear and can be a good initial review on the current state of knowledge. Recommended reading!
Saturday, 7 July 2007
"challenging the traditional view of our genetic blueprint as a tidy collection of independent genes, pointing instead to a complex network in which genes, along with regulatory elements and other types of DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood."

While this may be an interesting scientific result, the IHT article aptly points out that the one gene - one effect (one protein) stance led the way to gene patents: patents based on the assumption that an industrial gene (made by splicing techniques) would have a defined, owned, tracked and uniform effect, with, moreover, the ability to sell and retract it. But if genes really work in a complex network, how can we be sure which effect is due to which gene? Biologically. And legally too: what if one patented gene depends on other genes, natural or patented? Especially in ways we do not understand? How to pay royalties? How to split an eventual responsibility for damages? How to ensure that effects are uniform and as promised?
Well, there is much more to the subject. I was never in favour of granting patents on genes, but US law has allowed companies to do it. Without, as it turns out, a proper understanding of the separability of the "inventions". And because we lack real understanding of most of the genome (the original paper that prompted the IHT article covers about 1% of the genome), we are blundering blindly -- and that is never a good thing.
For more information see:
The ENCODE Project Consortium, Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project, Nature 447
Mark B. Gerstein et al., What is a gene, post-ENCODE? History and updated definition, Genome Res. 2007 17: 669-681
Thomas R. Gingeras, Origin of phenotypes: Genes and transcripts, Genome Res. 2007 17: 682-690
Saturday, 23 June 2007
And this was even stranger. The news came from Causation: International Journal Of Science.
It claims to be a peer-reviewed scientific electronic journal.
The front page boasts exploding graphics with the title Bell's theorem …refuted! in one-inch letters.
Inside one finds two (!) peer reviewed papers by Ilija Barukčić:
Bell's theorem. A fallacy of the excluded middle and Helicobacter pylori: the cause of human gastric cancer. Perhaps not surprisingly, the Editorial Board consists of — you guessed it: Ilija Barukčić!
But I was curious to see if this recent work had indeed resulted in a refutation of Bell's theorem (making my own effort to prove it for myself - successfully - useless and wrong).
I dug into the first article (which was quite cumbersome — as the text is really overfull of mathematical formulae, repeated endlessly). The conclusions of the author are really bold:
As proofed above, Bell's theorem is fallacious because of specifically logical reasons. The logic of Bell's theorem is not sound. Bell's theorem contradicts classical logic, it is based upon a fallacy. In so far either
Bell's theorem is valid or classical logic is valid but not both. Bell's theorem is not compatible with the law of the excluded middle, it is a fallacy of the excluded middle. Bell has committed the fallacy of the excluded middle, commonly referred to as a false dilemma. This logical fallacy is sometimes known also as a false correlative, an either/or fallacy, a bifurcation or as black and white thinking. Bell's formalisation of local realism, his starting point, is incorrect and is based on a logical contradiction. Bell's theorem, as a false dilemma fallacy, refers to a misuse of the law of the excluded middle. Bell has misapplied the law of excluded middle at an maximum. An extreme simplification, a wishful thinking and a misapplication of the law of the excluded middle is the foundation of Bell's theorem. In so far,
Bell's theorem is the most profound logical fallacy of science.
Further, Bell's theorem is the definite and best proof known, that correlation analysis contradicts Quantum mechanics and Relativity Theory, that it is a useless and dangerous statistical machinery. Thus, as proofed above, Bell's theorem is refuted definitely, the book on Bell's theorem is completely losed.
Finally I got to the essence of the proof of the refutation of Bell's theorem. It first appears on page 18 of the paper. I'll try to repeat here the most important step, taking the liberty of radically simplifying the notation. I ask the Reader to excuse the use of formulae here, but I think it is such a mathematical joke that it should be shared.
Bell's theorem is given by Barukčić as:
( 1 − ( (1 − (At)) · (1 − (Not At)) ) ) ≥ (Not At) + (Not Ct) · ( (At) − (Bt) ) (1)
Let us simplify it by denoting the left and right sides of the inequality:
( 1 − ( (1 − (At)) · (1 − (Not At)) ) ) = L (2)
(Not At) + (Not Ct) · ( (At) − (Bt) ) = R (3)
This really helps, as no operations are performed on L and R in the `proofs'. Thus what we have is the inequality
L ≥ R (4)
What Barukčić aims at is a proof by reductio ad absurdum, i.e., he assumes the theorem to be true and looks for logical discrepancies. There are four `proofs' and I'll present the first of them, quoting the author as closely as possible (some substitutions and cuts are made; the Reader interested in details can check the original paper).
The term R can take the values 0 or 1. In so far, let us assume, that R = 0. We obtain equation L ≥ (R=0).
It is generally accepted, that a ≥ b means that a = b or a > b, both are equally allowed and possible, if the inequality is true. In so far, Eq. 1 is true, if L=0.
Eq. 1 is equally true if L > 0. In this case, let us assume¹, that
L = (R=0),
which satisfies Bell's inequality. On the other hand, Bell is respecting classical logic and thus the law of the excluded middle. The law of excluded middle in classical bivalent logic must yield L=1. Bell's inequality is respecting this law. We obtain
(L=1) = (R=0) (5)
Bell's inequality leads to a logical contradiction, it not true that 1 = 0. Therefore, our original assumption, that Bell's theorem is correct is false.
This was the first of the four `proofs'. Of course, if one replaces the inequality by an equality and simultaneously imposes the condition R = 0, one gets a contradiction. But it is not Bell's theorem that `contradicts classical logic and leads to a logical contradiction' — it is the author himself.
¹ Emphasis mine. There is no emphasis on this assumption in the original paper...
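For completeness, the fallacy can be checked mechanically: an inequality between two binary quantities contains no contradiction at all, since three of the four possible value assignments satisfy it. The contradiction appears only when one silently trades ≥ for =, as the `proof' does:

```python
from itertools import product

# Mechanical check that L >= R over binary values harbours no
# contradiction: it merely rules out one of the four assignments.
satisfying = [(L, R) for L, R in product((0, 1), repeat=2) if L >= R]
print(satisfying)   # [(0, 0), (1, 0), (1, 1)]

# The "proof" replaces L >= R by L == R and then additionally
# demands L = 1 while R = 0; that is of course inconsistent, but
# the inconsistency is introduced by the author, not derived.
assert (1, 0) in satisfying   # L > R is perfectly allowed
```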
Wednesday, 20 June 2007
Tuesday, 19 June 2007
"The Munduruku language uses the count words for 1, 2 and 3 consistently, and 4 and 5 somewhat inconsistently. The Piraha do not even use the words for 1 and 2 consistently. How would members of these groups perform on various non-verbal tasks involving numerosity?
The amazing result was that both groups succeeded on non-verbal number tasks that used displays representing values (in one study) as large as 80."
I recall the Piraha story because of a totally unrelated paper by Irene M. Pepperberg called Grey parrot numerical competence: a review.
The abstract of the paper states:
"The extent to which humans and nonhumans share numerical competency is a matter of debate. Some researchers argue that nonhumans, lacking human language, possess only a simple understanding of small quantities, generally less than four. Animals that have, however, received some training in human communication systems might demonstrate abilities intermediate between those of untrained nonhumans and humans. Here I review data for a Grey parrot (Psittacus erithacus) that has been shown to quantify sets of up to and including six items (including heterogeneous subsets) using vocal English labels, to comprehend these labels fully, and to have a zero-like concept. Recent research demonstrates that he can also sum small quantities. His success shows that he understands number symbols as abstract representations of real-world collections, and that his sense of number compares favorably to that of chimpanzees and young human children."
Does make you think, doesn't it?
Monday, 18 June 2007
to the question of the `speed of quantum changes'. While the classical discussions of nonlocality
in Quantum Mechanics (QM) and consequences of Bell's Theorem are widely published,
there are some other situations where nonlocality is rather hard to grok.
Consider a hydrogen atom in an excited state. The electron wavefunction has some specific form, extending, via an exponentially vanishing factor, to infinity.
Now, when the atom emits a photon (preferably for this analysis in spontaneous emission)
the wavefunction changes.
Question: does the wavefunction change at the same moment in the whole space?
Or, as Zeh suggests, is there a `wave' of changing wavefunction, spreading out from the atom?
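A quick numerical reminder of why the question is nontrivial: the textbook radial wavefunctions (normalization constants omitted, r in units of the Bohr radius) are nonzero at arbitrarily large distances, so any `instantaneous' change of the state is a change everywhere in space:

```python
import math

# Unnormalized radial parts of the hydrogen 1s and 2p states.
# Their exponential tails never actually vanish, which is exactly
# what makes an "instantaneous" state change a global event.
def R_1s(r):
    return math.exp(-r)

def R_2p(r):
    return r * math.exp(-r / 2)

for r in (1.0, 10.0, 50.0):
    print(r, R_1s(r) > 0 and R_2p(r) > 0)  # True for every r > 0
```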
Does anyone know of any solution / references to this problem?
Sunday, 17 June 2007
by Bharat Ratra and Michael S. Vogeley, which has just appeared in arXiv has
not only a very comprehensive list of sources, but readable introductions into what is what.
The only negative remark I can find, speaking from the point of view of an amateur follower of science, is that the publication is too conservative with respect to `official channels' of science dissemination. Yes, I know that there are copyrights and that scientific journals usually do not have free access. And it is `proper' to give references to peer-reviewed journal publications.
But we do live in the Internet age, and astrophysics is, in fact, one of the fields where
e-preprints are very much alive, as the Resource Letter itself shows. So, I find it a bit disappointing that there are no links to preprints in the publication.
It would make life so much easier...
A word of warning to the Reader: do not expect expert knowledge here. I have long ago ceased to be an expert, even in my own discipline, solid state physics. I am, what one might call, a curious wanderer, going randomly through many disciplines in the realm of Science. I hope that the things that I discover might be interesting for other people. If so - I look forward to any comments.