Sunday 26 April 2009

SkyNet?

As the new Terminator movie approaches, the notion of people building a Skynet-like Artificial Intelligence seems a dark prospect. Yet there is something out there that looks just like it...

At http://www.intelligencerealm.com/aisystem we find an attempt to build AI in distributed, volunteer-computing fashion, using home computers a la SETI@home.

The aim:
This project uses Internet-connected computers in order to leverage the computing power of many machines. You can participate by downloading and running a free program on your computer. You will need to download the BOINC client manager from the BOINC web site. If you have any issues with the BOINC software please address them to their network of volunteers on the help page. We will post the source code on the SourceForge.net project site. We expect to launch new versions often so please bear with us. Building a neural network simulator requires much more than raw computing power and each released version will incrementally increase the system's features.
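
(For the curious: attaching a machine to such a project is normally a one-liner with BOINC's command-line tool, boinccmd. The account key below is a placeholder, not a real credential.)

    # attach this machine to the project (placeholder account key)
    boinccmd --project_attach http://www.intelligencerealm.com/aisystem YOUR_ACCOUNT_KEY
    # list the work units currently being crunched
    boinccmd --get_tasks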


How do they do it?
Results
To date we have simulated 709,358,333,333 neurons. The human brain has an estimated 100 billion neurons.
Computing Information
The neural network simulator is an application that simulates neurons. Each downloaded work unit generates 500,000 biophysical neurons. Because the simulator is in an initial phase and we have very few cellular models implemented, we can only use it to test simulation capacity. We have completed the first phase of the project: simulating over 100 billion neurons. The second-largest brain simulation was done on a cluster of 27 machines, with 100 billion neurons simulated over a period of 50 days. While it was a very interesting experiment which pushed the frontier further, it was only a partial simulation, in the sense that many of the required components were not implemented due to hardware constraints.

The neurons were created, simulated and then destroyed in memory, without any data being stored. Based on their results, the estimate for full brain simulations was the year 2016; we would like to prove otherwise. From a practical point of view it didn't advance the knowledge much further, and that's why we would like to continue along this line of thought and bridge these results with some practical data. The problem of storage and computing power is essential for large-scale brain simulations, because without addressing it we can't plan and estimate the requirements; without such planning there is also no clear understanding of what is needed. As we advance with the simulation and more and more neurons get simulated, we should be able to make increasingly precise estimates of storage, number of computers required, duration, bandwidth and other factors. Even though at this stage our simulation is not precise and is lacking in many respects, this is what we want to achieve with your help.

There is also the added benefit that once we publish these results, and the public at large sees that the capacity to simulate the entire brain is considerably higher than previously thought, a large stumbling block will be removed from the path of artificial intelligence.
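
Some back-of-envelope arithmetic makes the scale concrete: at 500,000 neurons per work unit, covering 100 billion neurons takes 100,000,000,000 / 500,000 = 200,000 work units. And the create-simulate-destroy cycle described above might, in spirit, look like the toy sketch below (my own illustration, using a crude leaky integrate-and-fire model, certainly not the project's biophysical simulator):

    import numpy as np

    NEURONS_PER_WORK_UNIT = 500_000      # per the project's description
    TARGET_NEURONS = 100_000_000_000     # rough human-brain neuron count
    print(TARGET_NEURONS // NEURONS_PER_WORK_UNIT, "work units")  # -> 200000

    def run_work_unit(n, steps=100, dt=1.0):
        """Create n toy leaky integrate-and-fire neurons, simulate, discard."""
        v = np.zeros(n)                     # membrane potentials
        tau, v_thresh, v_reset = 20.0, 1.0, 0.0
        spikes = 0
        for _ in range(steps):
            i_in = 0.1 * np.random.rand(n)  # random input current
            v += dt * (-v / tau + i_in)     # leaky integration step
            fired = v >= v_thresh
            spikes += int(fired.sum())
            v[fired] = v_reset              # reset the neurons that fired
        return spikes                       # nothing else is kept or stored

    print("spikes in one (scaled-down) work unit:", run_work_unit(10_000))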


Is this dangerous? Well, the FAQ confirms the danger:
If the system eventually becomes smarter than you, its creator, wouldn't that pose a risk?
It sure does. We understand the negative and positive implications of building an Artificial Intelligence system. That's why we have already restricted access and we will implement multiple levels of control and monitoring.


Nice. Fine. OK.

But... if the system turns out to be truly alien, and truly smarter than us - then what kind of security will be sufficient? Especially as the beast runs not in some enclosed laboratory, but in the wilderness of the worldwide network.

What worries me particularly is how the system will get the information:
Knowledge Acquisition:
The knowledge acquisition module is used for retrieving and defining the information that will form the future memories of the system. We are using a robot (i.e. web bot) to extract information from the Internet.
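
Such a web bot need not be anything exotic. A minimal sketch of the idea (my own illustration, not the project's actual module) - fetch a page, strip it down to text, collect the outgoing links to visit next:

    import re
    import urllib.request

    def fetch_page(url):
        """Download one page; return (rough plain text, outgoing links)."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        links = re.findall(r'href="(http[^"]+)"', html)  # crude link extraction
        text = re.sub(r"<[^>]+>", " ", html)             # crude tag stripping
        return text, links

    # hypothetical seed URL; a real crawler would also respect robots.txt,
    # deduplicate URLs and use a proper HTML parser
    text, links = fetch_page("http://example.com/")
    print(len(text), "characters,", len(links), "links")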


Hmmm... if the system learns about the RWOT (Real World Out There) from the Internet, it is going to be very, very confused.

The only hope is that the project shall fail.

Peer review in practice

It is worse than I thought.
submission date: March 6th
sent to referees: March 24th (why wait almost three weeks?)
on April 14th: reminders sent to the referees, because they had not responded.

My own average as a referee - so far - has been less than one week per review (including those where I had to do completely new calculations to prove the authors wrong). But I am an amateur.

Judging by some recent publications titled "Are we training pitbulls for peer review" (or something like that), there is a growing worry about the cornerstone of scientific credibility - the belief that published work IS checked and may be safely used by others.

We'll soon become much like pop artists: publish (rubbish) or perish. And science will become a beauty contest for funding. Dark future.

Wednesday 8 April 2009

Amateur Scientist - the story continues

After the first step of publishing in a little-known open-access electronic journal, I have sent some work to a leading physics journal - without any affiliation.

The paper is under review - but I am glad to announce that the lack of affiliation has posed no problem for the Editors.

So I guess I shall now wait for the Referees' opinions.