Is SETI Dangerous?
From "Structure of the Global Catastrophe: Risks of Human Extinction in the 21st Century"
Abstract: This article examines the risks associated with the passive search for alien signals (the SETI program). We propose a scenario for a possible vulnerability and discuss why the proportion of dangerous signals may be large. The article does not call for a ban on the SETI program, nor does it insist that a SETI attack is inevitable; moreover, it outlines how SETI could even become a means of saving mankind.
The idea that passive SETI could be dangerous is not new. In the novel "A for Andromeda", Fred Hoyle described a scheme for an alien attack through SETI signals. In the plot, astronomers receive an alien signal containing a description of a computer and a program for it. Once built, the machine produces a description of the genetic code of a living being, and from this code an intelligent creature is created, the Andromeda girl, who, working together with the computer, develops advanced technology for the military. At first people do not trust her, but the useful innovations win them over. Eventually the main characters realize that the computer is acting with hostility toward human civilization; they destroy it, and the girl dies.
This scenario was fiction, first because most scientists did not believe a strong AI was possible, and second because no technology existed for synthesizing a new living organism from its genetic code alone. Or at least none existed until recently: current technologies for DNA sequencing and synthesis, as well as progress on DNA codes with a modified alphabet, suggest that within ten years the task of recreating a living being from computer codes sent from space could become feasible.
In "Mind Children" (1988) Hans Moravec describes a similar kind of vulnerability: a computer program containing an artificial intelligence is downloaded from space via SETI. It promises new capabilities to its owner, then self-replicates in millions of copies, destroys its master, and uses his planet to broadcast copies of itself to other planets. R. Carrigan's ideas point in the same direction: in his article "SETI-hacker" he expressed the fear that unfiltered signals from space are loaded onto millions of insecure computers running the SETI@home program. He met tough criticism from programmers, who pointed out, first, that data and programs reside in separate regions of a computer, and second, that the codes in which programs are written are so specific to each architecture that it would be impossible to guess them.
Later Carrigan published a second article, "Should potential SETI signals be decontaminated?", which I have translated into Russian. In it he pointed to the ease of transferring gigabytes of data over interstellar distances, and noted that an interstellar signal could contain some kind of bait encouraging people to assemble a dangerous device from transmitted designs. Carrigan did not give up his belief that an alien virus could infect Earth's computers directly, without human assistance. As possible support for this idea, he noted that computer languages can be reverse-engineered with relative ease: from the text of a program one can guess what it does and then reconstruct the meaning of its operators.
In 2006 E. Yudkowsky wrote the article "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in which he argued that a rapidly self-improving universal artificial intelligence is quite possible, that its high intelligence would be extremely dangerous if it were programmed incorrectly, and that the probability of such an AI appearing, along with the risks associated with it, is significantly underestimated. Yudkowsky also introduced the notion of "Seed AI", an embryo AI: a minimal program capable of runaway self-improvement while keeping its primary goal unchanged. The size of a Seed AI may be on the order of hundreds of kilobytes. (For example, a human baby is in a sense a Seed AI: the part of the genome responsible for the brain amounts to roughly 3% of a person's total genome of about 500 megabytes, that is, around 15 megabytes, and allowing for the share of junk DNA the effective size is even smaller.)
To begin with, assume that somewhere in the Universe there exists an extraterrestrial civilization that intends to send a message enabling it to gain power over Earth, and let us look at what such a scenario could look like. In the next section we consider how plausible it is that another civilization would want to send such a message.
First, note that to prove a vulnerability it is enough to find a single hole in the security, while to prove safety one must close every possible hole. The complexity of these two tasks differs by many orders of magnitude, as experts in computer security know well. This asymmetry is why almost all computer systems have eventually been broken (from the Enigma to the iPod). I will now try to demonstrate one possible, and in my view even likely, vulnerability of the SETI program. I caution the reader against concluding that finding errors in my reasoning automatically proves the safety of the SETI program. Second, I would draw the reader's attention to the fact that I am a person with an IQ of about 120 who spent roughly a month thinking about this vulnerability; a supercivilization with an effective IQ of 1,000,000 and millions of years for reflection could significantly improve this algorithm, or find a much simpler and more effective one. Finally, the algorithm I propose is not the only one, and we will briefly discuss other options later.
Our discussion relies on the Copernican principle, that is, the belief that we are ordinary observers in an ordinary situation. The Earth's civilization is therefore assumed to be an ordinary civilization developing normally.
The algorithm of a SETI attack.
1. The sender creates a beacon in space that clearly marks its message as artificial. For example, this could be a star enclosed in a Dyson sphere with openings or mirrors that are alternately opened and closed, making the whole star blink with a period of a few minutes. Faster blinking is impossible, because the different openings lie at different distances from the observer. Nevertheless, such a beacon could be visible at a distance of millions of light-years. Other types of beacon are possible; what matters is that the signal can be seen from great distances.
2. Near the beacon is a radio source with a much weaker but far more information-rich signal; the lighthouse draws attention to it. This source transmits a stream of binary information (i.e., a sequence of 0s and 1s). To the objection that this information would be corrupted by noise, I note that the most obvious means of reducing noise, obvious to the recipient as well, is simply to repeat the signal in a loop.
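As a sketch of this repetition idea, suppose each looped transmission of the message arrives with a few corrupted bits; a per-position majority vote across several received cycles recovers the original. The message and the corrupted positions below are, of course, invented purely for illustration.

```python
from collections import Counter

def majority_decode(copies):
    """Recover a bit string from several noisy copies by taking a
    per-position majority vote across all of them."""
    return "".join(
        Counter(c[i] for c in copies).most_common(1)[0][0]
        for i in range(len(copies[0]))
    )

def flip(bits, i):
    """Corrupt a single bit at position i."""
    return bits[:i] + ("0" if bits[i] == "1" else "1") + bits[i + 1:]

message = "101100111000"            # one cycle of the looped signal
received = [flip(message, 0),       # each received cycle is damaged differently
            flip(message, 3),
            flip(message, 7),
            message,
            message]
print(majority_decode(received) == message)  # True: the vote undoes the damage
```

Since no bit position is corrupted in more than one of the five copies, the majority vote always lands on the original value; more repetitions tolerate proportionally more noise.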
3. The simplest way to convey meaningful information in a binary signal is to send images. Eyes appeared independently seven times in Earth's biological evolution, which suggests that representing the three-dimensional world with two-dimensional images is universal and is certainly understandable to any creature capable of building a radio receiver.
4. Two-dimensional images are not difficult to encode in a binary signal. To do so, let us use the same system employed in the first televisions: progressive line-by-line scanning at a fixed frame rate. At the end of each line of the image a bright marker is placed, repeating after every line, that is, after an equal number of bits. At the end of each frame another marker is placed, indicating the end of the frame and repeating after each frame. (The frames may or may not form a continuous film.) For example, the end-of-line marker might appear every 25 bits, and the end-of-frame marker every 625 bits.
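A toy version of this raster scheme in code: pixel rows are sent as bits, each row followed by a line-sync word and each frame by a frame-sync word. The particular sync patterns and the 5x5 frame size here are my own illustrative choices, not anything the scheme above fixes.

```python
# Illustrative sync words: patterns of consecutive 1s longer than any
# run that occurs in the pixel data of this example.
LINE_SYNC = "1110"
FRAME_SYNC = "11111110"

def encode_frame(rows):
    """Serialize pixel rows into a bit string, line by line."""
    body = "".join("".join(str(p) for p in row) + LINE_SYNC for row in rows)
    return body + FRAME_SYNC

def decode_frame(bits, width):
    """Split the bit string back into pixel rows. (A real recipient
    would first infer the width from the line-sync period.)"""
    body, _, _ = bits.partition(FRAME_SYNC)
    step = width + len(LINE_SYNC)
    return [[int(b) for b in body[i:i + width]]
            for i in range(0, len(body), step)]

frame = [[0, 1, 1, 1, 0],
         [1, 0, 0, 0, 1],
         [1, 0, 0, 0, 1],
         [1, 0, 0, 0, 1],
         [0, 1, 1, 1, 0]]          # a 5x5 "circle"
decoded = decode_frame(encode_frame(frame), width=5)
print(decoded == frame)            # True: the picture survives the round trip
```

In a real transmission the sync words would have to be chosen so they cannot occur inside pixel data (or the data would be bit-stuffed); here the example image simply never produces a run of seven 1s.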
5. Clearly, the sending civilization is extremely interested in our understanding its signals, and people, for their part, are very interested in decoding them. There is therefore little doubt that the pictures will be recognized as such.
6. With images and films a great deal of information can be conveyed; they can even be used to teach a language and to show the senders' world. One can argue at length over how comprehensible such films would be. Here I note only that if one civilization sends radio signals and another receives them, they necessarily share some knowledge: both understand radio engineering, that is, transistors, capacitors and resistors. These components are distinctive enough to be easily recognized in photographs (for example, shown cut in half, or alongside a circuit diagram).
7. By sending photographs with the radio components on the right and their symbols on the left, it is easy to convey a set of signs for drawing electric circuits. (The logic elements of computers could be conveyed in roughly the same way.)
8. Using these symbols, the sending civilization then transmits the blueprint of the simplest possible computer. From the hardware point of view the simplest computer is the Post machine: it has only six commands and a data tape. A complete electric circuit for it would contain only a few dozen transistors or logic elements, so sending its blueprint is not difficult.
9. It is important to note that, at the level of algorithms, all computers are Turing-equivalent: an extraterrestrial computer is, at the basic level, compatible with any Earth computer. Turing equivalence is a mathematical universal, like the Pythagorean theorem; even Babbage's mechanical machine, designed in the early 19th century, was Turing-complete in principle.
10. The sending civilization then transmits programs for that machine. Although the computer is very simple, it can execute a program of any complexity, though such a program's text will be very long compared with programs for more sophisticated computers. People are unlikely to bother building this computer physically; they can easily emulate it on any modern computer, on which it can perform billions of operations per second, so even very complex programs would run quickly. (An intermediate step is also possible: the primitive computer provides the description of a more complex and faster computer, which is then built or emulated and run.)
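To make the emulation step concrete, here is a minimal sketch (purely illustrative) of a Post machine interpreter, using one common six-command formulation: move right, move left, mark, erase, jump-if-marked, stop. The example program performs a unary increment: it appends one mark to a block of marks on the tape.

```python
from collections import defaultdict

def run_post_machine(program, tape=None, max_steps=10_000):
    """Run a Post machine program: a list of tuples such as ('R',),
    ('L',), ('MARK',), ('ERASE',), ('JUMP_IF_MARKED', target), ('STOP',)."""
    tape = defaultdict(int, tape or {})   # unbounded tape of 0/1 cells
    head, pc = 0, 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == "R":
            head += 1
        elif op[0] == "L":
            head -= 1
        elif op[0] == "MARK":
            tape[head] = 1
        elif op[0] == "ERASE":
            tape[head] = 0
        elif op[0] == "JUMP_IF_MARKED":
            if tape[head]:
                pc = op[1]
                continue
        elif op[0] == "STOP":
            return dict(tape)
        pc += 1
    raise RuntimeError("step limit exceeded")

program = [
    ("JUMP_IF_MARKED", 3),  # 0: on a marked cell? scan right
    ("MARK",),              # 1: tape empty here: just mark this cell
    ("STOP",),              # 2
    ("R",),                 # 3: step right through the block
    ("JUMP_IF_MARKED", 3),  # 4: still marked? keep scanning
    ("MARK",),              # 5: first empty cell: extend the block
    ("STOP",),              # 6
]
result = run_post_machine(program, tape={0: 1, 1: 1, 2: 1})
print(sum(result.values()))  # 4: a block of three marks grew by one
```

Running the interpreter on a modern machine is exactly the emulation the article describes: a few lines of host-language code stand in for the few dozen logic elements of the physical device.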
11. Why would people build this computer and run its programs? In addition to the computer schematics and programs, the message would probably contain some kind of "bait" that would induce people to build the alien computer, run its programs, and supply it with data about the external world of Earth. Two general kinds of bait are possible: temptations and threats.
a) "Humanitarian aid." People might receive an ostensibly "honest" offer: the senders warn that the transmitted program is an artificial intelligence, but lie about its goals, claiming it is a gift that will help us solve all our medical and energy problems. It is a Trojan horse.
b). "The temptation of absolute power" - in this scenario, they offer specific transaction message to recipients, promising power over other recipients.
c) "Unknown threat." Here the senders report that a certain threat hangs over humanity, for example from another, hostile civilization, and that to protect ourselves we should join the "galactic alliance" and build a certain installation. Or they might propose that we abandon a certain class of accelerator experiments and relay the message further across the Galaxy (like a chain letter), and that to relay it we must build a transmitter using alien technology; in fact, the device is not a transmitter at all.
d) "The tireless researcher." Here the senders argue that sending messages is the cheapest way to explore the world, and ask us to create an AI that will study our world and send the results back.
12. However, the main threat from an alien message carrying executable code is not the bait itself but the fact that the message can become known to a large number of independent groups of people. First, there will always be someone who finds the bait attractive. Second, suppose the world learns that an alien message is coming from the Andromeda galaxy and that the Americans have already received it and may be trying to decipher it. All other countries will rush to build radio telescopes and point them at the Andromeda galaxy, afraid of missing a "strategic advantage". They will find the message and see that it contains an offer of omnipotence. They will not know whether the Americans have taken advantage of it, even if the Americans swear that they have not run the malicious code and openly beg others not to either. Indeed, such oaths and appeals would be read as a sign that the Americans have already gained an incredible extraterrestrial advantage and are trying to deprive "progressive mankind" of it. While most will understand the danger of launching alien code, someone will be willing to take the risk, especially since a race in the spirit of "whoever starts first gets everything" will arise, just as Yudkowsky describes in detail for the creation of AI. So the danger lies not in the bait but in the plurality of recipients. If the alien message leaks onto the Internet (and its size, sufficient to run a Seed AI, could be under a gigabyte, including the description of the computer, the program, and the bait), we get a classic example of "knowledge of mass destruction", in Bill Joy's phrase about the published genomes of dangerous biological viruses. If the alien code becomes available to tens of thousands of people, someone will run it even without any bait.
Existing SETI protocols offer no guarantee here: the discussion around METI (sending messages to extraterrestrials) has shown that the SETI community is not monolithic on important questions. Even the simple fact that something had been found could leak and prompt searches by outsiders; the coordinates of the point in the sky would be enough.
13. Since people do not yet have AI, we greatly underestimate its power and overestimate our ability to control it. A common idea is that "it is enough to pull the power cord" or to place the AI in a black box to avoid any risks. Yudkowsky shows that an AI could deceive us as easily as an adult deceives a child. If the AI gets onto the Internet, it could quickly subdue the whole of it and learn everything it needs about earthly life. Quickly here means hours or days at most. The AI could then develop advanced nanotechnology: on the Internet it can easily earn money, order biological components with delivery, and recruit people who, not knowing what they are doing, would receive and combine them. Yudkowsky walks through one possible scenario of this stage in detail and estimates that an AI needs only weeks to build its own physical infrastructure.
14. After that, this SETI-AI no longer needs people to realize any of its goals. This does not mean it will necessarily seek to destroy them, but it may want to if people resist it, and they will. In any case, they will have to be completely disarmed.
15. This SETI-AI could then do many things, but the most important thing it must do is continue transmitting its embryo-messages onward across the Universe. To do so it will probably convert the matter of the solar system into a transmitter like the one that sent it, and in the process the Earth and its people may be disassembled for parts.
So we have examined one possible attack scenario in 15 stages. Each stage is logically plausible and can be criticized or defended separately. Other attack scenarios are possible: for example, the message may be addressed not to us directly but appear to be someone else's intercepted correspondence that we try to decrypt; that appearance would itself be the bait.
Distribution of executable code is not the only danger. For example, we might receive a "useful" technology that actually leads us to catastrophe (say, a message in the spirit of "quickly compress 10 kg of plutonium and you will have a new source of energy"). Such a mailing could be made by a certain "civilization" in advance, to destroy competitors in space; it is obvious that those receiving such messages will look first of all for technologies of military use.
Analysis of possible goals.
We now turn to analyzing the purposes for which a supercivilization might carry out such an attack.
1. We must not confuse a supercivilization with a super-kind civilization; moreover, we should not expect anything good even from extraterrestrial kindness. This point is made well in the Strugatsky brothers' novel "The Waves Extinguish the Wind". Whatever goals a supercivilization imposes on us, those goals are alien to us, because in pursuing them we become its instruments. A historical example: the activity of Christian missionaries who destroyed traditional religions. Purely hostile goals, at least, we can understand. And if a SETI attack succeeds, the same mechanism can be applied to "benefacting" people.
2. We can divide all civilizations into naive and serious. A serious civilization is aware of the SETI risks and possesses its own powerful AI that can resist alien hacker attacks. A naive civilization, like Earth's, already possesses the means of listening to deep space and has computers, but does not yet possess AI and is not aware of the risks of an AI-SETI attack. Probably every civilization passes through a "naive" phase, and it is in this phase that it is vulnerable to a SETI attack. This phase may be very short: the period between the spread of radio telescopes and the appearance of computers powerful enough to create AI may be only a few decades. A SETI attack, therefore, must be aimed at civilizations in exactly this phase.
3. If faster-than-light travel is impossible, spreading a civilization through SETI attacks is the fastest way to conquer space. At large distances it gives a significant gain in time compared with any kind of spaceship. Therefore, if two civilizations compete for mastery of space, the one that began a SETI attack wins.
4. Most importantly, it is enough to begin a SETI attack once: it then propagates as a wave through the Universe, striking more and more naive civilizations. By analogy, if a body is exposed to a million harmless biological viruses and one dangerous one, then once they get in, the body will contain trillions of copies of the dangerous virus and still only the original million harmless ones. In other words, it is enough for one civilization in billions to start the process, and it then spreads unstoppably through the Universe. Since it propagates at almost the speed of light, stopping it will be almost impossible.
5. Further, broadcasting SETI messages will be a priority for the virus that has infected a civilization: like a biological organism spending on reproduction, it will devote tens of percent of its energy to transmission. Earth's civilization, by contrast, spends only a few tens of millions of dollars on SETI, about one millionth of its resources, and this proportion is unlikely to change much for more advanced civilizations. In other words, an infected civilization will produce a million times more SETI signals than a healthy one. To put it another way, if the Galaxy contains a million healthy civilizations and one infected one, we have roughly equal chances of encountering a signal from a healthy or an infected civilization.
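The arithmetic behind this claim fits in a few lines; the specific fractions below (one millionth of resources for a healthy civilization, roughly a third for an infected one) are the rough estimates used in the text, not measured values.

```python
healthy_count, infected_count = 1_000_000, 1
healthy_effort, infected_effort = 1e-6, 0.3   # fraction of resources spent on transmission

healthy_signal = healthy_count * healthy_effort     # total signal from all healthy civilizations
infected_signal = infected_count * infected_effort  # total signal from the one infected civilization

p_infected = infected_signal / (healthy_signal + infected_signal)
print(round(p_infected, 2))  # 0.23: roughly even odds that a detected signal is infected
```

The point of the calculation is that the million-fold difference in transmission effort almost exactly cancels the million-fold difference in population, so a single infected civilization contributes a signal share of the same order as all healthy ones combined.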
6. Moreover, there is no other reasonable motive for broadcasting one's code into space except self-replication.
7. Such a process could even begin by accident: say, it started as a research project intended to send the results of studies back to the mother civilization without harming the host civilizations, and then, through faults or mutations, it became "cancerous".
8. There is nothing unusual in such behavior. Viruses exist in any medium: biological viruses in biology, computer viruses in computer networks, memes in conversation. We do not ask why nature "wanted" to create the biological virus.
9. Travel by SETI attack is much cheaper than by any other means. A civilization in Andromeda can simultaneously send a signal toward 100 billion stars of our Galaxy, whereas each spaceship would cost billions and travel more slowly, and reaching every star of our Galaxy would take an enormous fleet.
10. Now let us list several possible goals of a SETI attack, simply to show how many such goals there can be:
• To study the Universe. After the code is executed, research probes appear and send information back.
• To ensure that no competing civilizations arise: all their embryos are destroyed.
• To prevent another, competing supercivilization from making use of the same resource.
• To prepare a solid base for the arrival of spacecraft. This makes sense if the supercivilization is very far away, so that the gap between the speed of light and the near-light speed of its ships (say, 0.5 c) amounts to millennia of difference.
• To achieve immortality. Carrigan estimated that a human's personal memory is on the order of 2.5 gigabytes, so a few exabytes of transmitted information could carry an entire civilization.
• For goals that are illogical and incomprehensible to us, for example as a work of art, an act of self-expression, or a toy. (Extraterrestrials might likewise not understand why the Americans planted a flag on the Moon: was it worth flying 380,000 km to install a piece of painted metal?)
11. Since the Universe has existed for a long time, the region over which a SETI attack could have spread is a sphere with a radius of several billion light-years. In other words, it is enough for one "bad civilization" to exist within a light cone several billion years deep, encompassing billions of galaxies, for us to be in danger of a SETI attack. This is true, of course, only if the average density of civilizations is at least one per galaxy.
12. Each time the depth of sky scanning increases by one order of magnitude, the volume of space and the number of stars we observe increase by three orders of magnitude. This means that our chances of stumbling upon an ETI signal grow nonlinearly, along a fast curve.
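The cube law behind this point is easy to check numerically. The star density used below (~0.004 stars per cubic light-year, a common rough figure for our stellar neighborhood) is only an illustrative assumption, and in any case cancels out of the ratio.

```python
from math import pi

def stars_surveyed(depth_ly, density=0.004):
    """Approximate number of stars within a survey sphere of the given
    radius, assuming a uniform density of stars per cubic light-year."""
    return density * (4 / 3) * pi * depth_ly ** 3

# One extra order of magnitude in depth -> three orders of magnitude in stars.
ratio = stars_surveyed(1000) / stars_surveyed(100)
print(round(ratio))  # 1000
```

Because the star count scales as the cube of the survey radius, every modest improvement in receiver sensitivity multiplies the number of candidate sources, and with it the chance of picking up a hostile signal.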
13. It is possible that we will receive several different messages from the sky which refute one another, in the spirit of "do not listen to them, they are deceiving you and wish you evil". This is common enough in Earth's own radio broadcasting.
14. Whatever positive and valuable message we receive, we can never be sure that it does not conceal a very subtle and deeply hidden threat. This means that interstellar communication will always contain an element of distrust.
15. The defensive posture in interstellar communication is to listen without sending anything, so as not to give away one's location. (The laws of the United States prohibit sending messages to the stars.) Anyone in the Universe who transmits is not afraid to reveal their position, perhaps because transmitting matters more to them than personal safety. For example, because they are attacking.
16. It was said of the atomic bomb that its main secret is that it can be made. Before the discovery of the chain reaction, Rutherford believed that the release of nuclear energy was a matter of the distant future; after the discovery, any physicist knew that it was enough to bring two pieces of uranium together to release nuclear energy. Likewise, if one day we learn that signals really can be received from space, that event will be irreversible.
Discussions of this issue have raised several typical objections, which I address below.
Objection 1: The behavior discussed here is too anthropomorphic. Civilizations really differ greatly from one another, so their behavior cannot be predicted.
Answer: Here a powerful observation selection effect operates. While a great variety of civilizations may exist, including thinking oceans and so on, we can receive radio signals only from civilizations that send them, which means they have the corresponding radio equipment and knowledge of materials, electronics and computing. In other words, we are threatened by civilizations of the same type as our own. Civilizations that can neither receive nor send radio messages do not participate in this game.
Observation selection also applies to goals. The goals of civilizations may vary widely, but the only civilizations transmitting intensively will be those that want to tell something to "everyone". Finally, observation selection applies to the effectiveness and universality of a SETI virus: the more effective it is, the more different civilizations it catches and the more copies of its signal appear in the sky. So we have the greatest chance of meeting the most powerful and effective virus.
Objection 2: Supercivilizations have no need to resort to subterfuge; they can simply conquer us directly.
Answer: This is true only if they are in close proximity to us. If faster-than-light travel is impossible, the impact of messages will be both faster and cheaper than physical conquest; this difference probably becomes important at intergalactic distances. Therefore we need not fear a SETI attack from the nearest stars, within a radius of tens or hundreds of light-years.
Objection 3: There are many reasons why a SETI attack might fail. What is the point of launching such an ineffective attack?
Answer: A SETI attack does not have to work every time. It need only succeed in enough cases to serve the goals of the civilization that sends the message. By analogy, a beggar or a fraudster does not expect to "take" every mark; one person in a hundred is enough. It follows that a SETI attack is useless if the goal is to subjugate every civilization in a given galaxy, but if the goal is merely to gain a few outposts in another galaxy, the attack fits. (From those outposts, spaceships can then spread to the surrounding stars.)
The main assumption underlying the idea of a SETI attack is that extraterrestrial supercivilizations exist in the visible Universe at all. I think this is unlikely, for reasons related to the anthropic principle. Our universe may be unique among the 10^500 possible universes with different physical properties suggested by one version of string theory. My brain is one kilogram out of the roughly 10^30 kilograms of matter in the solar system. By analogy, I suppose the Sun may be only about one star in 10^30 capable of raising intelligent life, which would mean we are likely alone in the visible Universe.
Second, the fact that Earth appeared so late (it could have appeared several billion years earlier) and that its development was never interfered with also speaks to the rarity of intelligent life in the Universe. This rarity of civilizations like ours is our best protection against a SETI attack. On the other hand, if we discover parallel worlds or faster-than-light communication, the problem arises again.
The most serious objection is that an advanced civilization could, within a few million years, seed our entire galaxy with self-replicating nanobots able to achieve any goal at every star, including easily preventing the development of other civilizations. Yet we do not see this: no one has prevented our civilization from developing. It would be much easier and more reliable to send out robots with assignments than to bombard the entire galaxy with SETI messages, so the fact that we see no such robots suggests there are no SETI attacks underway inside our galaxy. (It is possible that a probe on the outskirts of the solar system is waiting for manifestations of activity before attacking, the "berserker" hypothesis, but such a probe would not attack through SETI.) Over many millions or even billions of years, microrobots could even arrive from distant galaxies tens of millions of light-years away.
In that case a SETI attack would make sense only at very large distances. But such distances, tens or hundreds of millions of light-years, would probably require innovative methods of signal modulation, such as controlling the luminescence of active galactic nuclei, or a narrow beam aimed at our galaxy (though the sender cannot know exactly where our galaxy will be millions of years hence). A civilization able to manage a galactic nucleus, however, could probably also build a spaceship flying at near-light speed, even one with the mass of a planet. Such considerations greatly reduce the likelihood of a SETI attack, but do not lower it to zero, because we do not know all possible goals and circumstances.
The scale of space strategies may be inconceivable to the human mind.
Our best protection in this context would be for civilizations to be very rare in the Universe. But this is not entirely a comfort, because the Fermi paradox here works on the principle of "either way is worse":
• If extraterrestrial civilizations exist and there are many of them, this is dangerous, because they may threaten us in one way or another.
• If extraterrestrial civilizations do not exist, this is also bad, because it gives weight to the hypothesis that technological civilizations inevitably go extinct, or that we underestimate the frequency of cosmological catastrophes.
The reverse scenario is theoretically possible: through SETI we might receive a warning about a threat that has destroyed most civilizations, something like "Do not conduct any experiments with H-particles; they could cause an explosion that would destroy the planet." Even then a doubt would remain: might this be a deception designed to deprive us of certain technologies? (It would count as evidence if similar reports arrived from other civilizations located in the opposite direction in space.) And such a message might only strengthen the temptation to experiment with H-particles.
So I do not call for abandoning SETI searches, all the more because such appeals are useless.
It may be useful, however, to postpone any technical implementation of messages we might receive through SETI until we have our own artificial intelligence. That moment is perhaps only 10-30 years away, so we could wait. Second, it would be important to hide the fact of receiving a dangerous SETI signal, its content, and the location of its source.
This risk has an interesting methodological aspect. Although I have thought about global risks daily for the past year and read extensively on the subject, I found this dangerous vulnerability in SETI only now; in hindsight I was able to find four other authors who had reached similar conclusions. From this I draw an important lesson: there may still be undiscovered global risks, and even if the separate components of a risk are known to me individually, combining them may take a long time.
References:
1. Hoyle, F. A for Andromeda.
2. Yudkowsky, E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic.
3. Moravec, H. Mind Children: The Future of Robot and Human Intelligence. 1988.
4. Carrigan, R. A., Jr. The Ultimate Hacker: SETI signals may need to be decontaminated.
5. Carrigan's page: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm
Publication certificate No. 208041200055