Full-Stack Developer!
I am my own cloud.
Tell Me More

Skills

Including, but not limited to...

Responsive Frontend

Vue.js with the MVVM pattern to separate the UI from the backend, and responsive HTML5 that resizes, hides, or moves content so it looks good on any screen.

E-Commerce

Integration built on Lumen, with horizontal scaling for your platform and a unified API covering more than 20 top-quality online shopping solutions.

PHP Backend

Complex web applications demand powerful and flexible backends, built with Laravel 5 following the MVC architectural pattern.

Portfolio

My work on a page.

About

My evolution into the present digital world.

  • 1999-2002

    My Humble Beginning

    As a student at the Technical University I started working part-time for a milk processing company as a DBA and system & network administrator... the young smarty "Cable Guy".

  • 2002-2005

    HTML, CSS & ActionScript

    Playing with Photoshop and slicing designs was the first step deeper into HTML & CSS, and later into scripting, bringing "Life" to websites with Macromedia's avant-garde ActionScript technology.

  • 2005-2010

    Databases, Programming and ExtJS

    The expansion of the internet and Web 2.0 demanded more powerful applications than simple websites. The Web-as-Platform idea pushed me to extend my skills into PHP programming and relational databases, while ExtJS tamed unstandardized browsers and cross-platform compatibility!

  • 2010-2015

    Mobile development was born

    The explosion of the mobile internet meant more processing on the client side and more sophisticated UIs supporting touch events and responsive designs. It was time to introduce myself to JavaScript frameworks like jQuery, jQTouch, and Sencha! Hybrid mobile applications pushed me to run my own root servers hosting cutting-edge web technologies like Node.js and NoSQL databases.

  • 2015-present

    Scalability, MVC, MVVM architectures

    Today's demand for power and quality requires unit testing and powerful, flexible frameworks: Laravel 5 on the backend with an MVC architecture and Vue.js on the frontend with an MVVM architecture for data binding, creating a beautiful harmony between backend, frontend, and APIs!

  • to be continued

My current Interests

Private and futuristic projects for weekends

Handywork DIY

Gym Replacement

Raspberry Pi

Home Automation

Computer Vision

OpenCV & Deep Learning

I firmly believe that being self-sufficient is one of the most important things a person can achieve. Continuous self-education can diversify your skills in almost endless areas. Doing handiwork not only gives you a practical mindset, it also makes you understand how important teamwork is if you want to build bigger and stronger structures.

Contact Me

Ask for my CV or send me your thoughts.

A Proposed Design for
Distributed Artificial General Intelligence

Version 2.2
By Matt Mahoney, Oct. 13, 2008

Abstract

This document describes a proposed design for a globally distributed artificial general intelligence (AGI) for the purpose of automating the world economy. The estimated value is on the order of US $1 quadrillion. The cost of a solution would be of the same order if we assume a million-fold decrease in the costs of computation, memory, and bandwidth, solutions to the natural language, speech, and vision problems, and an environment of pervasive public surveillance. The high cost implies decentralized ownership and a funding model that rewards intelligence and usefulness in a hostile environment where information has negative value and owners compete for attention, reputation, and resources.

The proposed solution is called competitive message routing (CMR). To a human user or a specialized intelligent server, CMR is a content-searchable message pool to which anyone may post. AGI is achieved by routing messages to the right specialists. CMR is implemented as a peer to peer network where incoming messages are cached, matched to stored messages, and each is forwarded to the sources of the other message. Economic, security, and long term safety issues are discussed. A specific protocol is proposed, defining a message format, sender authentication, and transport over existing internet protocols.

1. Goals

The purpose of AGI is twofold: first to improve the efficiency of organizations of humans by facilitating communication and access to information, and second, to automate the economy with respect to those functions that require human labor.

With respect to communication, the goal is to make information easy to find and publish. To send a message, you simply speak or type it into your computer or phone to nobody in particular, and it goes to anyone who cares, human or machine. If your message is in the form of a question, then it goes to anyone who can or has already answered it. If it is in the form of a statement, it goes to anyone whose question (past or future) it answers. Sending a message may initiate a public conversation with others that share your interests. When enough narrowly intelligent experts are added to the network, you should not care (or may prefer) that your conversation be with machines rather than humans.

With respect to automating labor, the goal is to reduce costs. It is not simply to replace existing jobs with machines, e.g. replacing truck drivers and lawyers, resulting in massive unemployment. Rather, easy access to an immense global database and computing infrastructure should result in new ways of solving problems, for example, the way shopping on the internet has supplemented traditional markets and created new job opportunities for web designers. Although the trend is clear that we are better off with automation, the details are difficult to predict, so I will not attempt to do so. Fifty years ago we imagined a future where robots pumped gas and tourists went to Mars, not the other way around.

2. Cost

The cost of AGI is estimated to be on the order of US $1 quadrillion.

First, we assume that Moore's Law will continue to halve the cost of computing power, storage, and network bandwidth every year or two for at least the next 30 years, as it has done for the last 50 years or so.

Second, we assume that there are no fundamental obstacles to solving hard AI problems such as language and vision, other than lack of computing power. In the worst case, these could be implemented using human brain sized neural networks, specifically, 10^11 neurons and 10^15 synapses modeled at 10 ms resolution. An equivalent artificial neural network would require about 10^15 bytes of memory and 10^17 operations per second. As of 2008 this would require about 20 doublings of Moore's law to make brain-sized computing power affordable to individuals. It is possible that more efficient solutions may be found sooner. In any case, we may ultimately neglect the cost of hardware.
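
The arithmetic behind these figures, as a quick sketch (one byte per synapse weight and one update per synapse per 10 ms step are the assumptions stated above):

  # Back-of-the-envelope hardware estimate for a brain-sized network.
  neurons  = 10**11   # neurons in a human brain
  synapses = 10**15   # synapses in a human brain
  step     = 0.010    # model resolution in seconds (10 ms)

  memory_bytes = synapses          # ~1 byte per synapse weight
  ops_per_sec  = synapses / step   # every synapse updated each step

  print(f"memory:  {memory_bytes:.0e} bytes")   # 1e+15
  print(f"compute: {ops_per_sec:.0e} ops/sec")  # 1e+17

  # 20 doublings of Moore's law is about a 10^6-fold drop in cost.
  print(f"after 20 doublings: {2**20:.1e}x cheaper")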

Third, we assume that people will want AGI. It implies pervasive surveillance, since it will be the cheapest way for it to acquire the necessary knowledge. We imagine that a search for "where was Matt Mahoney last Saturday?" would produce a map, annotated with links to hundreds of videos recorded on public cameras indexed by face and license plate recognition software, and links to any conversations I made through computers or face to face within range of a public microphone. These conversations would be instantly indexed, summarized, and sent to anyone with an interest in what I said. At the same time, I would be notified of your query. We assume that people will want their conversations public because of the convenience, for example:

  She: Hi dear. Could you pick up some Chinese on the way home?
  He: OK, the usual?
  Wok-in-the-box: Your order will be ready in 5 minutes.

In building AGI, we can choose between spending more money to get it sooner vs. spending less by waiting until hardware costs drop. The optimal point is the value of the labor replaced divided by market interest rates. As of 2006, the value of labor worldwide was US $66 trillion. If we assume an interest rate of 6%, then the value would be $1 quadrillion. However, this is a moving target. The world GDP is increasing at about 5% annually. At this rate, it will be worth $4 quadrillion in 30 years.
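
The arithmetic, as a sketch (the 6% interest rate and 5% growth rate are the assumptions above):

  # Present value of world labor, per the figures in the text.
  labor_2006 = 66e12   # US$ per year, world labor value (2006)
  interest   = 0.06    # assumed market interest rate
  growth     = 0.05    # assumed annual world GDP growth

  print(f"present value: ${labor_2006 / interest:.1e}")
  # ~1.1e+15, on the order of $1 quadrillion

  future = labor_2006 * (1 + growth)**30 / interest
  print(f"in 30 years:   ${future:.1e}")
  # ~4.8e+15, about $4 quadrillion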

As a best case scenario, we may assume that the major cost of AGI will be software and knowledge, which is not subject to Moore's Law. To make knowledge acquisition as cheap as possible, we will assume that the AGI understands language, speech, and images so it is not necessary to write code. We can train AGI on already published information plus surveillance of our normal activities rather than explicit instruction. It remains to estimate how much knowledge we need, how much is already available, and the rate that the remainder can be acquired.

In order to automate the economy, AGI must have knowledge equivalent to the world's population of about 10^10 human brains. According to recall tests performed by Landauer, humans learn at a rate of 2 bits per second and have a long term memory capacity of about 10^9 bits. This is also about the amount of language processed since birth by an average adult, assuming 150 words per minute, several hours per day, at a rate of 1 bit per character as originally estimated by Shannon.

This implies that AGI needs 10^19 bits of knowledge to automate the economy, except that people have shared knowledge. One of the advantages of machines over humans is that knowledge can be copied easily, eliminating the need to build millions of schools to train billions of agents. However, some fraction of knowledge is unique to each job and can't be copied. Organizations become more efficient when their members specialize. The fraction of shared knowledge is hard to estimate, but we can get some idea from the cost of replacing an employee, sometimes a year's salary. If we assume that 90% to 99% of human knowledge is shared, then AGI needs 10^17 to 10^18 bits of knowledge.

Currently, the internet has insufficient data to train AGI. A quick Google search for common English words ("the", "of", "a") shows that the accessible part of the internet is about 3 x 10^10 web pages as of 2008. If each page has a few kilobytes of text, then there is about 10^14 bits of knowledge available after compression. But even if it were 10^16 bits, it would be far less than the amount needed. The rest still needs to be extracted from human brains.

We are fundamentally limited by the speed with which humans can communicate. Humans convey information at the same rate that they store it, about 2 bits per second. At current labor rates (US $5/hour worldwide), this implies a lower bound on cost of $100 trillion to $1 quadrillion as this information is gathered over several years from the world's population. However, this is a moving target. As we are developing language, vision, and surveillance capabilities to reduce knowledge acquisition costs, organizations are also becoming more efficient through increased specialization, with less duplication of knowledge and skills. In the worst case, an optimally efficient human economy with 10^19 bits could cost on the order of $10 quadrillion in today's dollars to automate.
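
These estimates chain together as follows; a worked sketch using the figures above (the population, Landauer's memory estimate, and the $5/hour labor rate are the stated assumptions):

  # Knowledge required to automate the economy, and its extraction cost.
  brains   = 10**10        # world population, order of magnitude
  per_head = 10**9         # bits of long term memory per person (Landauer)
  total    = brains * per_head              # 1e19 bits, ignoring overlap

  for shared in (0.90, 0.99):
      print(f"{shared:.0%} shared -> {total * (1 - shared):.0e} unique bits")
      # 1e+18 and 1e+17 bits respectively

  # Extraction cost at human speech rates and world labor prices.
  rate = 2                 # bits per second a human can convey
  wage = 5.0               # US$ per hour
  per_bit = wage / (3600 * rate)
  for bits in (10**17, 10**18):
      print(f"{bits:.0e} bits -> ${bits * per_bit:.0e}")
      # ~$7e+13 (about $100 trillion) and ~$7e+14 (about $1 quadrillion)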

3. An AGI Design

I propose a design for AGI called competitive message routing (CMR). To a client, CMR looks like a pool of messages which can be searched by content. A client can either be a human user or a machine (an expert) providing some specialized service, such as a calculator, database, or narrow AI application. When a client posts a message (adds it to the pool), it goes to anyone who has previously posted a similar message, and those similar messages are returned to the client. AGI is achieved by attaching lots of experts and routing messages to the right experts.

We make no distinction between queries and documents. A message may be used to ask a question, answer a question, post information, or initiate a public, interactive conversation with someone who shares your expressed interest. Every message is tagged with the name of its creator and time of creation. If the information was obtained from elsewhere, then the original sources should be included. Messages cannot be deleted or modified once they are added to the pool. However, updates can be posted and the latest version can be identified by its timestamp.

3.1. Competitive Message Routing

CMR is implemented on a peer to peer network, as described in my thesis. A message occupies a point in an m-dimensional semantic space, for example, a vector space model. Peers have a cache of messages, some of which originated from other peers. When a client creates a message X, the peers have the goal of routing X to the peers that hold messages close to it, and sending those responses back to the originator as well as any peers through which X was routed.

Example: suppose Alice wants to know what the largest planet is. She doesn't know, and doesn't know who might know, but she knows Bob from the message M1 he sent last week: "Hello". She asks Bob. Bob doesn't know either, but guesses that Charlie is an expert on planets because he received the message M2 from him last month: "Mercury is the innermost planet". The conversation goes like this:

  Alice -> Bob: X = "What is the largest planet?"
  Bob -> Charlie: "Alice asked a minute ago: What is the largest planet?"
  Charlie -> Alice, Bob: M3 = "Dave said last year: Jupiter is the largest planet."

Fig. 1. Traversing messages in semantic space toward goal X.

The stored messages M1, M2, and M3 get progressively closer to X in semantic space, as shown in Fig. 1. Each peer knows only a subset of the messages in this space. The originator of a message serves as a link to other peers whose semantic regions overlap. When no more progress can be made in approaching X, the last peer replies to all involved in routing X with the closest known match.

For a graph of n vertexes embedded in a semantic space of m dimensions, progress toward a target in space is possible in O(log n/log m) time when the average degree of the graph is at least 2m. For example, when m = 1, a binary tree will suffice. For a natural language semantic space, m is the size of the vocabulary, about 10^5.

As explained in my thesis, the model is robust. Each update adds more direct links to improve response time to similar messages, and new links to the graph to replace failed peers and links. (For example, Alice now knows that Charlie and Dave both know about Jupiter). Each query results in more copies of the requested message, to help load balancing. This robustness compensates for differences in the semantic models of different peers. A client may also post a message to several peers to improve the expected number of responses.

3.2. Routing Strategy

An organization is optimally efficient when there is no unnecessary duplication of knowledge among peers beyond what is needed for fault recovery. This is achieved by a market economy where information has negative value. Peers have an incentive to offload information to other peers and delete their own copy as long as it remains accessible. Peers can mutually benefit by trading messages if both parties can compress the received messages more tightly than the sent messages.

Trading results in peers storing groups of similar messages, clusters in semantic space. We define a distance between messages X and Y as D(X, Y) = K(Y|X) + K(X|Y) where K is Kolmogorov complexity, i.e. K(Y|X) = K(XY) - K(X) is the length of the shortest program that outputs Y given X as input. K is not computable in general, so for practical purposes we substitute a compression difference measure, C(X, Y) = C(Y|X) + C(X|Y) where C(Y|X) = C(XY) - C(X) and C(XY) means the compressed size of X concatenated with Y. The better the compression algorithm, the more closely C approximates K.

D is compatible with Euclidean distance in the vector space model because it has the properties of a distance measure. For distinct messages X, Y, and Z:

  • D(X, X) = 0.
  • D(X, Y) > 0.
  • D(X, Y) = D(Y, X).
  • D(X, Y) + D(Y, Z) ≥ D(X, Z).
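
A minimal sketch of the compression difference measure defined above, with zlib standing in for an ideal compressor (a stronger compressor makes this a better approximation of K; on strings this short, zlib's overhead adds noise):

  import zlib

  def csize(s: bytes) -> int:
      """C(s): compressed size in bytes, zlib standing in for an
      ideal compressor."""
      return len(zlib.compress(s, 9))

  def cdist(x: bytes, y: bytes) -> int:
      """C(x, y) = C(y|x) + C(x|y), where C(y|x) = C(xy) - C(x)."""
      return (csize(x + y) - csize(x)) + (csize(y + x) - csize(y))

  x  = b"What is the largest planet?"
  m2 = b"Mercury is the innermost planet"
  m3 = b"Jupiter is the largest planet"
  print(cdist(x, m3), cdist(x, m2))  # expect the first to be smaller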

We can now describe a routing policy. A peer has a cache of messages received from other peers. We assume that messages cluster within peers, meaning D(X, Y) is likely to be smaller if X and Y are stored on the same peer than on different peers. Suppose Alice receives message X and must decide whom to route it to. Alice computes D_i = D(X, Y_i) for each peer i, where Y_i is the concatenation of all messages received from peer i. Then Alice routes X to the peer i that minimizes D_i. This is Alice's best estimate of the peer that can compress X the most.
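
A sketch of this policy, reusing the zlib-based cdist above; the peer names and cache contents are illustrative, taken from the earlier example:

  import zlib

  def csize(s: bytes) -> int:
      return len(zlib.compress(s, 9))

  def cdist(x: bytes, y: bytes) -> int:
      return (csize(x + y) - csize(x)) + (csize(y + x) - csize(y))

  def route(x: bytes, peers: dict[str, list[bytes]]) -> str:
      """Route X to the peer i minimizing D_i = D(X, Y_i), where Y_i
      is the concatenation of all messages received from peer i."""
      return min(peers, key=lambda i: cdist(x, b"".join(peers[i])))

  peers = {"Bob":     [b"Hello", b"Nice weather today"],
           "Charlie": [b"Mercury is the innermost planet"]}
  print(route(b"What is the largest planet?", peers))  # likely "Charlie"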

Let's say the best match to X is i = Bob. If Bob is one of the senders of X, then Alice should keep X rather than send it back. To compensate for her storage cost, Alice should send a different message back to Bob, one that takes a lot of space in Alice's cache but that Bob can easily compress. For each message Z_j in Alice's cache where Bob is not one of the senders, Alice computes D_j = D(Z_j, Z_{i=1..n, i≠j}) - D(X, Z_j) and replies with the Z_j that maximizes D_j. The term Z_{i=1..n, i≠j} means the concatenation of all of the messages in the cache except Z_j.

X is a guess about what Bob knows. Alice can concatenate other messages received from Bob to X when computing D(X, Z_j).
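
A sketch of the reply choice, again with zlib standing in for the compressor:

  import zlib

  def csize(s: bytes) -> int:
      return len(zlib.compress(s, 9))

  def cdist(x: bytes, y: bytes) -> int:
      return (csize(x + y) - csize(x)) + (csize(y + x) - csize(y))

  def pick_reply(x: bytes, cache: list[bytes]) -> bytes:
      """Return the Z_j maximizing D(Z_j, rest of cache) - D(X, Z_j):
      costly for Alice to keep, cheap for Bob to compress."""
      def score(j: int) -> int:
          rest = b"".join(z for i, z in enumerate(cache) if i != j)
          return cdist(cache[j], rest) - cdist(x, cache[j])
      return cache[max(range(len(cache)), key=score)]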

When Alice sends or forwards a message, she does not delete it right away. She keeps it in the cache until she needs space. Peers are free to choose their own deletion policies. This requires some intelligence. Otherwise peers that provide unlimited free space can be exploited by non-cooperating peers.

3.3. Flow Control

Peers may choose their routing strategies independently using different compression algorithms. This introduces uncertainty as to which message is the closest match. We compensate by routing messages to more than one peer when all of them are fairly close. In the previous example we might have additional messages such as:

  Bob -> Alice: "Charlie said last month: Mercury is the innermost planet"
  Charlie -> Dave: "Bob said 1 minute ago that Alice asked 2 minutes ago: What is the largest planet?"
  Dave -> Alice, Bob, Charlie: "Jupiter is still the largest planet"

However, a ratio of output to input greater than 1 could lead to an exponential explosion of messages. To compensate, peers should delete duplicate messages and establish a distance threshold or other criteria such that if there is no good match then the message is discarded. In general, there is no clean solution. A good routing strategy requires intelligence and an economic policy that rewards intelligence as discussed in section 4.

4. Security and Economic Considerations

AGI is too expensive for any one person or group to own or be able to control any significant part of it. The system must be decentralized. There is no central authority to enforce the peer to peer protocol. Messages may not conform to the protocol and may be hostile. Peers will need to deal with false information, spam, scams, flooding attacks, forgery, attached viruses and worms, and malformed or malicious messages that attempt to exploit software flaws such as buffer overflows to crash the recipient or gain unauthorized access.

In order for AGI to be built, there must be an economic incentive for users to add well-behaved peers to the system. They must have an incentive to make CPU, memory, and bandwidth (resources) and high quality information available. They must have an incentive to prevent their peers from being exploited by forwarding malicious traffic.

The economic model of CMR is based on information having negative value on average. People are willing to pay to have their messages received by others. Peers compete for attention, reputation, and resources in a hostile environment. This requires that peers rank others by the quality of information received from them and reject messages from peers with poor reputations. Distinguishing between high and low quality information requires intelligence. It requires work by users, either to rate messages to adjust the reputation of senders, or to specify complex filtering policies. This work may be a significant fraction of the total cost of AGI, perhaps the single greatest cost.

CMR supports an advertising model. However, peers must target ads carefully to those who want them, or risk being blocked or having to pay other peers to forward their ads. Users need to supply useful services and information in a competitive market in order to make a profit. This is the other major cost of AGI.

Reputation management requires that peers reliably identify their sources. CMR allows senders to sign messages independently of any underlying protocol that might also provide this service (such as HTTPS). A CMR digital signature is based on a secret key shared by the sender and receiver. If Bob receives a signed message from Alice, then Bob can verify that the message is from the same source that claimed to be Alice in other signed messages, and that the message was not altered or forged by anyone not knowing the key. A key is never shared with any other pair of peers, minimizing the damage if it is compromised.

When Alice sends a message to Bob for the first time, they may establish a key using one time public RSA key pairs, by Diffie-Hellman (DH) key exchange, or by other methods outside the CMR protocol. However there is no foolproof method of key exchange. RSA and DH are susceptible to man in the middle attacks if the attacker is able to intercept messages from both peers.

Signatures are used only between immediate neighbors. If Alice sends a message to Bob, which is forwarded to Charlie, then Charlie can only verify Bob's signature and must trust that Bob verified Alice. If in doubt, Charlie should ask Alice directly. If Charlie determines that Bob's message was bogus, then Charlie should downgrade only Bob's reputation, since Bob's claim that it is from Alice cannot be trusted.

5. Long Term Safety of AGI

AGI will obviously have a huge impact on society. In particular Vernor Vinge has raised the possibility of runaway AI resulting in a technological singularity. Organizations such as SIAI and the Lifeboat Foundation were formed to address the existential threats of an intelligence explosion of unfriendly AI. I will address these concerns with respect to CMR.

5.1. Recursive Self Improvement

One possibility is that a smarter than human AI could produce even smarter AI in a process of recursive self improvement (RSI). In this scenario, the first program to achieve superhuman intelligence would be the last invention that humans would need to create. One of the goals of SIAI has been to develop that seed AI and ensure that it remains friendly through successive generations.

I do not believe this approach is viable. Currently there are no physical, mathematical, or software models of RSI for any reasonable definition of what "intelligence" means. I have shown that RSI is possible, but only in a trivial sense. The maximum rate of improvement is O(log n), no faster than a counter. Intelligence requires information, and information can't come from nowhere.

An often cited example of RSI is the growth of human civilization. But it is not RSI: economic, cultural, and technological growth is a form of self organization, not self improvement. Individual humans can do much more today than 1000 years ago, but our brains are no different. Without language, modern humans probably would not think to produce spears out of sticks and rocks, much less produce computers.

In the short term, CMR produces AGI from lots of machines that individually have low intelligence and only do what they are programmed to do by humans. These machines have no goals and cannot improve themselves. However, CMR rewards intelligence, where intelligence is defined as the ability to successfully acquire computing resources. The absence of RSI only means that improvement is an evolutionary process in which programs do not set the criteria for testing intelligence in their copies.

5.2. Uploading

Humans, like all animals, have evolved a fear of death (actually, a fear of the many things that can kill us), because it increases the expected number of children. Some people may wish to create programs that simulate their brains and have them turned on after they die. Such a program would have the same memories, skills, and simulated emotions as the original and be indistinguishable to friends and relatives.

We call such a program an upload. It is not necessary to develop any new technology (such as brain scanning) beyond what we have already assumed: a million-fold drop in computing costs and extensive surveillance to extract 10^9 bits of knowledge from every human. Any memory differences between the original and copy could be filled in with plausible details, and the copy would not appear to notice. We might assume other technology is developed later, such as robotic embodiment.

Whether an upload actually transfers the original's consciousness is an irrelevant, philosophical question. What is important is that to others it will appear that the original has been brought back to life, making it a popular option. As I described CMR, it is friendly because humans own computing resources and the machines themselves have no rights. The danger is that people will want their uploads to have the same rights as they had when they were alive, including ownership of resources. This will result in humans competing with machines of their own creation rather than just each other. It also raises difficult legal issues because uploads could reproduce themselves rapidly, modify their own software, and there is no clear legal or technical distinction between uploads and other types of programs.

5.3. Intelligent Worms

CMR is an environment where peers compete for resources. These could be stolen by trickery or by exploiting software flaws. Currently, internet worms exploit flaws discovered by humans to make copies of themselves. Intelligent worms with language models that include programming skills are much more dangerous, for four reasons:

  1. Worms could analyze source code and executable code to discover thousands of security vulnerabilities for which no patches have been developed.
  2. Worms could use language to trick humans into installing them. ("Please enter your administrative password to install 79 updates").
  3. Worms could modify their source code and evolve. Evolution favors worms that reproduce successfully and evade detection.
  4. Once your computer is infected, it could intelligently monitor your activities. ("No viruses detected").

It is possible that every computer could be quickly infected and we would not know.

5.4. Redefining Humanity

Whether or not we have RSI, uploading, or intelligent worms, at some point the total knowledge and computation implemented in machines will exceed that of human brains. Beyond that, it would matter little to the global brain whether humans were extinct or not. If not, then humans would at least be unaware of the intelligence that controlled the world around them, just as dogs are unaware of the human intelligence or goals of their breeders.

We should hope that the collective intelligence would be benevolent to humans, but what does that mean? Humans want happiness, but happiness is just a reward signal controlled by a complex function that evolved to increase reproductive fitness. Expressed as a reinforcement learner, the human brain implements an optimization process whose goal is to maximize the scalar utility U(x) over the 2^(10^9) to 2^(10^15) possible mental states, x. We experience happiness when we go from a mental state x1 to x2 where U(x1) < U(x2). At some point there is a maximum where any thought or sensory awareness would be unpleasant because it would result in a different mental state.

A possible way out is to augment the brain with additional memory so that we never run out of mental states. Then what do we become? At some point the original brain becomes such a tiny fraction that we could discard it with little effect. As a computer, we could reprogram our memories and goals. Instead of defining happiness as getting what we want, we could reprogram ourselves to want what we have. But in an environment where programs compete for atoms and energy, such systems would not be viable. Evolution favors programs that fear death and die, that can't get everything they want, and can't change what they want.


Appendix A
CMR Protocol Specification

A1. Overview

This section describes a proposed implementation of the competitive message routing (CMR) protocol. CMR is a distributed message posting and search service. When a client posts a message, it has the effect of adding the message to the pool and retrieving related messages that are either already in the pool or that are posted later by other clients. There is no provision to delete or modify messages once posted.

A CMR network is composed of peers. Any peer may send messages to any other peer. Every peer should have a globally unique address for receiving messages.

A peer is either a router or a client. A client may either be an interface to a human user or to a server. CMR protocol specifies only the behavior of routers. Clients do not have to follow any rules. Thus, peers cannot tell if incoming messages are from routers or clients because it is possible for a client to emulate a router.

A router has a store of messages called a cache. The cache contains previously received messages. A router cannot create new messages. Only a client can do that.

A2. Message Format

A CMR message is a string of 8-bit bytes. A message consists of a signature, a routing header, and a message body.

A signature consists of a line of text ending with a carriage return (CR, ASCII 13) and a linefeed (LF, ASCII 10). The first character of the signature is a version number, either 0 (ASCII 48) or 1 (ASCII 49), followed by a hash string. If the version number is 0 then the hash string is empty and the message is said to be unsigned. If it is 1, then the hash string is the SHA-256 hash of a secret key k_ab concatenated with the routing header and message body. The hash is written as 64 lower case hexadecimal digits. k_ab should be known only to the sender whose address appears in the first line of the routing header and the receiver. k_ab may be of any length. In computing the hash, no separator bytes are placed between the key and the routing header.

A routing header consists of one or more message IDs. A message ID is a line consisting of a timestamp, a space character (ASCII 32), a server address and a CR LF to terminate the line. The message ID means that the message was sent at the indicated time by the peer with the given address. The message IDs are ordered from newest to oldest. Thus, the last line of the header identifies the client that created the message and the time it was created. The last line of the header is followed by a blank line (an additional CR LF).

A message ID uniquely identifies a message. No two messages that contain matching IDs anywhere in their routing headers should differ anywhere in the remainder of the header or in the body.

A timestamp has the format YYYY/MM/DD HH:MM:SS[.S*] (year, month, day, hour, minute, seconds, optional fractional seconds) in the range 0000/01/01 00:00:00 to 9999/12/31 23:59:59.999... . Times are universal times (UT, formerly GMT). A decimal point after the seconds is optional. If it appears, it may be followed by any number of decimals. A timestamp must represent a valid time, not in the future, and earlier than the timestamp on the line above it, if any.
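
A sketch of timestamp validation under this grammar (Python's datetime cannot represent year 0000, so the spec's lower bound is approximated, and the ordering check against the previous header line is omitted):

  import re
  from datetime import datetime, timezone

  STAMP = re.compile(r"^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}(\.\d*)?$")

  def valid_timestamp(s: str) -> bool:
      """Format matches, the time is real, and it is not in the future."""
      if not STAMP.match(s):
          return False
      try:
          t = datetime.strptime(s.split(".")[0], "%Y/%m/%d %H:%M:%S")
      except ValueError:      # e.g. month 13 or day 32
          return False
      return t.replace(tzinfo=timezone.utc) <= datetime.now(timezone.utc)

  print(valid_timestamp("2008/09/01 00:00:53.42"))  # True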

An address is an internet server address. It has the form of a URL if the underlying transport protocol supports it. Otherwise it is any string not containing the CR or LF characters. No address should appear more than once in a header. The recipient's address should not appear at all.

The message body consists of a length, n, written as a decimal number followed by CR LF, then n bytes (values 0 through 255). Normally the body will contain human understandable data such as ASCII or UTF-8 encoded text, or suitably encoded audio, images, or video in common formats. However, there is no restriction on content. For example:


101c1ad32d6fe646a56edba43bd342ee11e45ba188c59aeec91a5590a0ea114d5
2008/09/01 00:00:53.42 http://alice.com/cmr
2008/09/01 00:00:53 udp://192.168.0.102:53/dns-tunnel
2008/08/31 23:59:01.0955 https://www.charlie.com/messages.pl

14
Hello World!

In this example, k_ab is the 3 byte string "foo", known only to the peer with address http://alice.com/cmr and the message recipient. The message was created by the client whose address is https://www.charlie.com/messages.pl.
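
A sketch that rebuilds this example message per the format above; the digest printed is computed from these exact bytes, so it need not match the illustrative hash shown earlier:

  import hashlib

  CRLF = b"\r\n"

  def frame(key: bytes, header: bytes, body: bytes) -> bytes:
      """Signed CMR message: '1' + 64 hex digits of SHA-256(k_ab | rest),
      then the routing header, a blank line, and the body."""
      rest = header + CRLF + body   # blank line terminates the header
      sig = hashlib.sha256(key + rest).hexdigest().encode()
      return b"1" + sig + CRLF + rest

  header = (b"2008/09/01 00:00:53.42 http://alice.com/cmr" + CRLF
            + b"2008/09/01 00:00:53 udp://192.168.0.102:53/dns-tunnel" + CRLF
            + b"2008/08/31 23:59:01.0955 https://www.charlie.com/messages.pl"
            + CRLF)
  payload = b"Hello World!" + CRLF                    # 14 bytes with CR LF
  body = str(len(payload)).encode() + CRLF + payload  # "14" CR LF payload
  print(frame(b"foo", header, body).decode())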

A3. Router Behavior

When a router receives a message, it should reject (ignore and discard) any message that does not conform to the format described in the last section. It should reject any message with an ID that matches an ID of another message already in the cache. If the message is signed, then it should compute the signature and reject it if it does not match. It may also reject a message for any reason, for example, if the sender has a low reputation, or the content is recognized as spam or malicious, or if it cannot keep up with input. CMR does not require any policy.

If a message X is accepted, then it should be matched to zero or more messages in the cache and added to the cache. For each message Y matched to X, it should forward X to zero or more addresses that appear in the header of Y, and forward Y to zero or more addresses that appear in the header of X.

Router Alice forwards message X to peer Bob as follows:

  1. Alice removes the signature line from X.
  2. Alice adds X to her cache.
  3. Alice assigns X := T | " " | Alice | CR | LF | X, where T is the current time, " " is a space, and | means string concatenation. Alice's clock should be of sufficient resolution that each forwarded message has a different value of T.
  4. If Alice knows the shared secret key k_ab with Bob, then Alice assigns X := "1" | SHA-256(k_ab | X) | CR | LF | X. Otherwise Alice assigns X := "0" | CR | LF | X.
  5. Alice sends X to Bob.
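
A sketch of these steps (caching, step 2, is left to the caller; microsecond timestamps supply the resolution step 3 asks for):

  import hashlib
  from datetime import datetime, timezone

  CRLF = b"\r\n"

  def forward(x: bytes, my_addr: bytes, key: bytes | None) -> bytes:
      """Steps 1 and 3-5 of A3: strip the old signature line, prepend
      our own message ID, then sign with the shared key k_ab (or mark
      the message unsigned with version 0)."""
      _, _, x = x.partition(CRLF)                 # 1. remove signature line
      t = datetime.now(timezone.utc).strftime("%Y/%m/%d %H:%M:%S.%f")
      x = t.encode() + b" " + my_addr + CRLF + x  # 3. X := T " " Alice CR LF X
      if key is None:                             # 4. no shared key: unsigned
          return b"0" + CRLF + x
      sig = hashlib.sha256(key + x).hexdigest().encode()
      return b"1" + sig + CRLF + x                # ready for 5. send to Bob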

CMR does not specify a cache deletion policy. A router may remove messages from its cache at any time.

A4. Transport

CMR is a super-application layer protocol. It may be sent over existing internet protocols such as HTTP or HTTPS or in the body of an email message. In general, a peer implements a single server protocol such as HTTP (appearing as a web server) and multiple client protocols such as HTTP (appearing as a web browser) and SMTP (to send email). The following sections describe the protocols for Alice to send a message to Bob:

A4.1. SMTP

The recipient has a server address of the form mailto:email-address such as mailto:bob@bob.com. Alice sends an email message to Bob. The body of the email message may be either a CMR message or a MIME encoded file attachment containing a CMR message. The subject line is irrelevant. The recommended subject is "cmr".

A4.2. HTTP

The recipient has a server address in the form of a URL beginning with http:// such as http://bob.com/cgi-bin/cmr.pl. Messages are sent as plain text files using the file upload protocol described in RFC 1867.

A4.3. HTTPS

The protocol for HTTPS is the same as for HTTP. HTTPS also encrypts the message and provides for a secondary means of authentication.

A4.4. UDP

UDP represents a worst case scenario because a message is contained in a single packet with no assurance that the source IP address is correct. The server address has the form udp://IP-address:port/string (not a standard URL), such as in the previous example. The string is used to identify which local CMR server listening on the port should receive the message. A message should fit in the payload of a single packet (up to about 64 KB).

A4.5. HTTP Handshake

Handshake protocols are appropriate when the sender can reliably send a message to a receiver but the receiver cannot verify the sender's address. This model is usually assumed for email and HTTP requests. We assume that Alice and Bob both have HTTP server addresses, say, http://alice.com/ and http://bob.com/, and that they do not share a key. The exchange is:

  1. Alice sends an HTTP GET request of the form receiver?request=sender&key=secret, for example
    http://bob.com/?request=http://alice.com/&key=foo
  2. Bob replies with a blank web page and closes the connection.
  3. Bob sends an HTTP GET request to Alice of the form sender?reply=secret, for example
    http://alice.com/?reply=foo
  4. Alice replies to the request with a web page of Content-Type: text/plain containing the unsigned message.
  5. Alice and Bob discard the key. They use a new key for each message.
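
A sketch of the sender's half of this exchange, assuming the third-party requests library and the placeholder addresses above. Steps 3-4 run on Alice's own HTTP server, which answers a matching reply=key request with the unsigned message as text/plain and then discards the key:

  import requests  # assumed available; any HTTP client works

  def handshake_announce(receiver: str, sender: str, key: str) -> None:
      """Step 1: tell the receiver who is calling and which one-time
      key to echo back; the blank reply page (step 2) is ignored."""
      requests.get(receiver, params={"request": sender, "key": key})

  handshake_announce("http://bob.com/", "http://alice.com/", "foo")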

A5. Key Exchange

When Alice sends a message to Bob for the first time and they do not share a secret key, a key may be established in a number of ways that are defined outside the CMR protocol. A key is unique to each pair of peers.

A5.1 RSA

Alice generates a one time RSA key and sends a message to Bob with her public key. Bob replies by choosing a random key, encrypting it with her public key and sending it to Alice. If no key has been established, then both messages are unsigned. Otherwise, both messages are signed with the old keys and the new key takes effect for both Alice and Bob on the next message.

The protocol is as follows. Alice chooses two large prime numbers p and q, a public exponent e, and a private exponent d such that ed = 1 (mod lcm(p-1,q-1)), and computes n = pq. Alice sends a message to Bob with the following in the body:


  RSA key exchange request=n,e.

where all numbers are written in lower case hexadecimal. Bob then chooses a secret key k, computes c = k^e (mod n) and replies to Alice:


  RSA key exchange reply=c.

Alice then computes the key k = c^d (mod n) and discards p, q, n, and d. Alice and Bob remove the two messages from their caches.
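
A toy sketch of this exchange with textbook-sized primes; a real implementation would use cryptographic key sizes and padding:

  from math import lcm
  import secrets

  # Alice's one-time RSA key.
  p, q = 61, 53
  n = p * q                          # modulus, 3233
  e = 17                             # public exponent
  d = pow(e, -1, lcm(p - 1, q - 1))  # private: ed = 1 (mod lcm(p-1, q-1))
  print(f"RSA key exchange request={n:x},{e:x}")   # hex, per the spec

  # Bob chooses a secret key k and encrypts it with Alice's public key.
  k = secrets.randbelow(n)
  c = pow(k, e, n)                   # c = k^e (mod n)
  print(f"RSA key exchange reply={c:x}")

  # Alice recovers k, then discards p, q, n, and d.
  assert pow(c, d, n) == k           # k = c^d (mod n)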

A5.2. Diffie-Hellman

In Diffie-Hellman key exchange, Alice chooses a secret number a, a large prime number p, and a primitive root g of p. Alice computes A = g^a (mod p) and sends a message to Bob of the form:


  DH key exchange request=g,p,A.

where all numbers are written in lower case hexadecimal. Bob chooses a secret number b, computes B = g^b (mod p) and replies to Alice:


  DH key exchange reply=B.

Alice and Bob then agree to use the key k = B^a (mod p) = A^b (mod p), computed by Alice and Bob respectively.

As with RSA, messages are either unsigned or signed with old keys.
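
A toy sketch with the classic small textbook parameters (p = 23, g = 5); a real exchange would use a large prime:

  import secrets

  p, g = 23, 5                       # small prime and a primitive root of it

  a = secrets.randbelow(p - 2) + 1   # Alice's secret
  A = pow(g, a, p)                   # A = g^a (mod p)
  print(f"DH key exchange request={g:x},{p:x},{A:x}")

  b = secrets.randbelow(p - 2) + 1   # Bob's secret
  B = pow(g, b, p)                   # B = g^b (mod p)
  print(f"DH key exchange reply={B:x}")

  assert pow(B, a, p) == pow(A, b, p)   # shared key k, computed both ways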

A5.3. Secure Channels

If CMR is implemented on top of a secure protocol such as HTTPS or SSH, then the key may be chosen by Alice and sent directly in a single message to Bob:


  Clear key exchange=k.

The key is written in lower case hexadecimal and is converted into bytes (two digits per byte) when computing authentication strings. For example, if the key is "foo", then the message is:


  Clear key exchange=666f6f.

(In practice, a longer key would be used).

Notes

This documentation is a revision of my (Matt Mahoney) original proposal dated Dec. 6, 2007.

Source: http://mattmahoney.net/agi2.html

Vernor Vinge

Department of Mathematical Sciences
San Diego State University

(c) 1993 by Vernor Vinge
(This article may be reproduced for noncommercial purposes if it is copied in its entirety, including this notice.)

The original version of this article was presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993. A slightly changed version appeared in the Winter 1993 issue of Whole Earth Review.

Abstract

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented.

What is The Singularity?

The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):

  • There may be developed computers that are "awake" and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is "yes, we can", then there is little doubt that beings more intelligent can be constructed shortly thereafter.)
  • Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
  • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  • Biological science may provide means to improve natural human intellect.

The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work -- the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct "what if's" in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.

From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in "a million years" (if ever) will likely happen in the next century. (In [5], Greg Bear paints a picture of the major changes happening in a matter of hours.)

I think it's fair to call this event a singularity ("the Singularity" for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam [28] paraphrased John von Neumann as saying:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed (see [25]).)

In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote [11]:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.

Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's "tool" -- any more than humans are the tools of rabbits or robins or chimpanzees.

Through the '60s and '70s and '80s, recognition of the cataclysm spread [29] [1] [31] [5]. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the "hard" science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future [24]. Now they saw that their most diligent extrapolations resulted in the unknowable ... soon. Once, galactic empires might have seemed a Post-Human domain. Now, sadly, even interplanetary ones are.

What about the '90s and the '00s and the '10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we'll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if we were willing to give up speed, if we were willing to settle for an artificial being who was literally slow [30]. But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment.)

But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technologically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of _true_ technological unemployment finally come true.

Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing science fiction in the middle '60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed.

And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected -- perhaps even to the researchers involved. ("But all our previous models were catatonic! We were just tweaking some parameters....") If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.

And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty.

Can the Singularity be Avoided?

Well, maybe it won't happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose [19] and Searle [22] against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question "How We Will Build a Machine that Thinks" [27]. As you might guess from the workshop's title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power that is present in organic brains. A minority felt that the largest 1992 computers were within three orders of magnitude of the power of the human brain. The majority of the participants agreed with Moravec's estimate [17] that we are ten to forty years away from hardware parity. And yet there was another minority who pointed to [7] [21], and conjectured that the computational competence of single neurons may be far higher than generally believed. If so, our present computer hardware might be as much as _ten_ orders of magnitude short of the equipment we carry around in our heads. If this is true (or for that matter, if the Penrose or Searle critique is valid), we might never see a Singularity. Instead, in the early '00s we would find our hardware performance curves beginning to level off -- this because of our inability to automate the design work needed to support further hardware improvements. We'd end up with some _very_ powerful hardware, but without the ability to push it further. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever "wake up" and there would never be the intellectual runaway which is the essence of the Singularity. It would likely be seen as a golden age ... and it would also be an end of progress. This is very like the future predicted by Gunther Stent. In fact, on page 137 of [25], Stent explicitly cites the development of transhuman intelligence as a sufficient condition to break his projections.

But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the "threat" and be in deadly fear of it, progress toward the goal would continue. In fiction, there have been stories of laws passed forbidding the construction of "a machine in the likeness of the human mind" [13]. In fact, the competitive advantage -- economic, military, even artistic -- of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.

Eric Drexler [8] has provided spectacular insights about how far technical improvement may go. He agrees that superhuman intelligences will be available in the near future -- and that such entities pose a threat to the human status quo. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely. This is I. J. Good's ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate -- say -- one million times slower than you, there is little doubt that over a period of years (your time) you could come up with "helpful advice" that would incidentally set you free. (I call this "fast thinking" form of superintelligence "weak superhumanity". Such a "weakly superhuman" entity would probably burn out in a few weeks of outside time. "Strong superhumanity" would be more than cranking up the clock speed on a human-equivalent mind. It's hard to say precisely what "strong superhumanity" would be like, but the difference appears to be profound. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight? (Now if the dog mind were cleverly rewired and _then_ run at high speed, we might see something different....) Many speculations about superintelligence seem to be based on the weakly superhuman model. I believe that our best guesses about the post-Singularity world can be obtained by thinking on the nature of strong superhumanity. I will return to this point later in the paper.)

Another approach to confinement is to build _rules_ into the mind of the created superhuman entity (for example, Asimov's Laws [3]). I think that any rules strict enough to be effective would also produce a device whose ability was clearly inferior to the unfettered versions (and so human competition would favor the development of those more dangerous models). Still, the Asimov dream is a wonderful one: Imagine a willing slave, who has 1000 times your capabilities in every way. Imagine a creature who could satisfy your every safe wish (whatever that means) and still have 99.9% of its time free for other activities. There would be a new universe we never really understood, but filled with benevolent gods (though one of _my_ wishes might be to become one of them).

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet.... In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a _dedication_ that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a "Meta-Golden Rule", which might be paraphrased as "Treat your inferiors as you would be treated by your superiors." It's a wonderful, paradoxical idea (and most of my friends don't believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)

I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans' natural competitiveness and the possibilities inherent in technology. And yet ... we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:

Other Paths to the Singularity: Intelligence Amplification

When people speak of creating superhumanly intelligent beings, they are usually imagining an AI project. But as I noted at the beginning of this paper, there are other paths to superhumanity. Computer networks and human-computer interfaces seem more mundane than AI, and yet they could lead to the Singularity. I call this contrasting approach Intelligence Amplification (IA). IA is something that is proceeding very naturally, in most cases not even recognized by its developers for what it is. But every time our ability to access information and to communicate it to others is improved, in some sense we have achieved an increase over natural intelligence. Even now, the team of a PhD human and good computer workstation (even an off-net workstation!) could probably max any written intelligence test in existence.

And it's very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out first what we really are and then building machines that are all of that. And there is at least conjectural precedent for this approach. Cairns-Smith [6] has speculated that biological life may have begun as an adjunct to still more primitive life based on crystalline growth. Lynn Margulis (in [15] and elsewhere) has made strong arguments that mutualism is a great driving force in evolution.

Note that I am not proposing that AI research be ignored or less funded. What goes on with AI will often have applications in IA, and vice versa. I am suggesting that we recognize that in network and interface research there is something as profound (and potentially wild) as Artificial Intelligence. With that insight, we may see projects that are not as directly applicable as conventional interface and network design work, but which serve to advance us toward the Singularity along the IA path.

Here are some possible projects that take on special significance, given the IA point of view:

  • Human/computer team automation: Take problems that are normally considered for purely machine solution (like hill-climbing problems), and design programs and interfaces that take advantage of humans' intuition and available computer hardware. Considering all the bizarreness of higher dimensional hill-climbing problems (and the neat algorithms that have been devised for their solution), there could be some very interesting displays and control tools provided to the human team member (see the sketch after this list).
  • Develop human/computer symbiosis in art: Combine the graphic generation capability of modern machines and the esthetic sensibility of humans. Of course, there has been an enormous amount of research in designing computer aids for artists, as labor saving tools. I'm suggesting that we explicitly aim for a greater merging of competence, that we explicitly recognize the cooperative approach that is possible. Karl Sims [23] has done wonderful work in this direction.
  • Allow human/computer teams at chess tournaments. We already have programs that can play better than almost all humans. But how much work has been done on how this power could be used by a human, to get something even better? If such teams were allowed in at least some chess tournaments, it could have the positive effect on IA research that allowing computers in tournaments had for the corresponding niche in AI.
  • Develop interfaces that allow computer and network access without requiring the human to be tied to one spot, sitting in front of a computer. (This is an aspect of IA that fits so well with known economic advantages that lots of effort is already being spent on it.)
  • Develop more symmetrical decision support systems. A popular research/product area in recent years has been decision support systems. This is a form of IA, but may be too focussed on systems that are oracular. As much as the program gives the user information, there must be the idea of the user giving the program guidance.
  • Use local area nets to make human teams that really work (i.e., are more effective than their component members). This is generally the area of "groupware", already a very popular commercial pursuit. The change in viewpoint here would be to regard the group activity as a combination organism. In one sense, this suggestion might be regarded as the goal of inventing a "Rules of Order" for such combination operations. For instance, group focus might be more easily maintained than in classical meetings. Expertise of individual human members could be isolated from ego issues such that the contribution of different members is focussed on the team project. And of course shared databases could be used much more conveniently than in conventional committee operations. (Note that this suggestion is aimed at team operations rather than political meetings. In a political setting, the automation described above would simply enforce the power of the persons making the rules!)
  • Exploit the worldwide Internet as a combination human/machine tool. Of all the items on the list, progress in this is proceeding the fastest and may run us into the Singularity before anything else. The power and influence of even the present-day Internet is vastly underestimated. For instance, I think our contemporary computer systems would break under the weight of their own complexity if it weren't for the edge that the USENET "group mind" gives the system administration and support people! The very anarchy of the worldwide net development is evidence of its potential. As connectivity and bandwidth and archive size and computer speed all increase, we are seeing something like Lynn Margulis' [15] vision of the biosphere as data processor recapitulated, but at a million times greater speed and with millions of humanly intelligent agents (ourselves).
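
As a toy illustration of the first item above (the human/computer team on hill-climbing problems), here is a minimal C sketch; the objective function and all names are invented for illustration. The machine does the mechanical climbing, while the human supplies the intuition by proposing starting points from whatever display the interface provides.

    /* Toy division of labor: the machine hill-climbs mechanically from
     * each starting point; the human, looking at a display, supplies
     * the starts. Objective and names are invented for illustration. */
    #include <stdio.h>

    /* A smooth 1-D example landscape with a single peak at x = 3. */
    static double objective(double x) {
        return 2.0 - (x - 3.0) * (x - 3.0);
    }

    /* Machine's role: simple local hill-climbing from a given start. */
    static double hill_climb(double x, double step) {
        while (step > 1e-6) {
            if (objective(x + step) > objective(x))
                x += step;
            else if (objective(x - step) > objective(x))
                x -= step;
            else
                step /= 2.0;   /* stuck on both sides: refine the step */
        }
        return x;
    }

    int main(void) {
        /* Human's role: eyeball the landscape and propose starts. */
        double human_guesses[] = { -10.0, 0.0, 7.5 };
        for (int i = 0; i < 3; i++) {
            double peak = hill_climb(human_guesses[i], 1.0);
            printf("start %+6.1f -> peak near x = %.4f (value %.4f)\n",
                   human_guesses[i], peak, objective(peak));
        }
        return 0;
    }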

The above examples illustrate research that can be done within the context of contemporary computer science departments. There are other paradigms. For example, much of the work in Artificial Intelligence and neural nets would benefit from a closer connection with biological life. Instead of simply trying to model and understand biological life with computers, research could be directed toward the creation of composite systems that rely on biological life for guidance or for providing features we don't understand well enough yet to implement in hardware. A long-time dream of science-fiction has been direct brain to computer interfaces [2] [29]. In fact, there is concrete work that can be done (and is being done) in this area:

  • Limb prosthetics is a topic of direct commercial applicability. Nerve to silicon transducers can be made [14]. This is an exciting, near-term step toward direct communication.
  • Direct links into brains seem feasible, if the bit rate is low: given human learning flexibility, the actual brain neuron targets might not have to be precisely selected. Even 100 bits per second would be of great use to stroke victims who would otherwise be confined to menu-driven interfaces.
  • Plugging in to the optic trunk has the potential for bandwidths of 1 Mbit/second or so. But for this, we need to know the fine-scale architecture of vision, and we need to place an enormous web of electrodes with exquisite precision. If we want our high bandwidth connection to be _in addition_ to what paths are already present in the brain, the problem becomes vastly more intractable. Just sticking a grid of high-bandwidth receivers into a brain certainly won't do it. But suppose that the high-bandwidth grid were present while the brain structure was actually setting up, as the embryo develops. That suggests:
  • Animal embryo experiments. I wouldn't expect any IA success in the first years of such research, but giving developing brains access to complex simulated neural structures might be very interesting to the people who study how the embryonic brain develops. In the long run, such experiments might produce animals with additional sense paths and interesting intellectual abilities.

Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendence.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety ... well, some of the suggestions are a little scary on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today's world, one where losers take on the winners' tricks and are co-opted into the winners' enterprises. A creature that was built _de novo_ might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].

The problem is not simply that the Singularity represents the passing of humankind from center stage, but that it contradicts our most deeply held notions of being. I think a closer look at the notion of strong superhumanity can show why that is.

Strong Superhumanity and the Best We Can Ask for

Suppose we could tailor the Singularity. Suppose we could attain our most extravagant hopes. What then would we ask for: That humans themselves would become their own successors, that whatever injustice occurs would be tempered by our knowledge of our roots. For those who remained unaltered, the goal would be benign treatment (perhaps even giving the stay-behinds the appearance of being masters of godlike slaves). It could be a golden age that also involved progress (overleaping Stent's barrier). Immortality (or at least a lifetime as long as we can make the universe survive [10] [4]) would be achievable.

But in this brightest and kindest world, the philosophical problems themselves become intimidating. A mind that stays at the same capacity cannot live forever; after a few thousand years it would look more like a repeating tape loop than a person. (The most chilling picture I have seen of this is in [18].) To live indefinitely long, the mind itself must grow ... and when it becomes great enough, and looks back ... what fellow-feeling can it have with the soul that it was originally? Certainly the later being would be everything the original was, but so much vastly more. And so even for the individual, the Cairns-Smith or Lynn Margulis notion of new life growing incrementally out of the old must still be valid.

This "problem" about immortality comes up in much more direct ways. The notion of ego and self-awareness has been the bedrock of the hardheaded rationalism of the last few centuries. Yet now the notion of self-awareness is under attack from the Artificial Intelligence people ("self-awareness and other delusions"). Intelligence Amplification undercuts our concept of ego from another direction. The post-Singularity world will involve extremely high-bandwidth networking. A central feature of strongly superhuman entities will likely be their ability to communicate at variable bandwidths, including ones far higher than speech or written messages. What happens when pieces of ego can be copied and merged, when the size of a selfawareness can grow or shrink to fit the nature of the problems under consideration? These are essential features of strong superhumanity and the Singularity. Thinking about them, one begins to feel how essentially strange and different the Post-Human era will be -- _no matter how cleverly and benignly it is brought to be_.

From one angle, the vision fits many of our happiest dreams: a time unending, where we can truly know one another and understand the deepest mysteries. From another angle, it's a lot like the worst-case scenario I imagined earlier in this paper.

Which is the valid viewpoint? In fact, I think the new era is simply too different to fit into the classical frame of good and evil. That frame is based on the idea of isolated, immutable minds connected by tenuous, low-bandwidth links. But the post-Singularity world _does_ fit with the larger tradition of change and cooperation that started long ago (perhaps even before the rise of biological life). I think there _are_ notions of ethics that would apply in such an era. Research into IA and high-bandwidth communications should improve this understanding. I see just the glimmerings of this now [32]. There is Good's Meta-Golden Rule; perhaps there are rules for distinguishing self from others on the basis of bandwidth of connection. And while mind and self will be vastly more labile than in the past, much of what we value (knowledge, memory, thought) need never be lost. I think Freeman Dyson has it right when he says [9]: "God is what mind becomes when it has passed beyond the scale of our comprehension."

[I wish to thank John Carroll of San Diego State University and Howard Davidson of Sun Microsystems for discussing the draft version of this paper with me.]

Annotated Sources [and an occasional plea for bibliographical help]

[1] Alfvén, Hannes, writing as Olof Johanneson, _The End of Man?_, Award Books, 1969; earlier published as _The Tale of the Big Computer_, Coward-McCann, translated from a book copyright 1966 Albert Bonniers Förlag AB, with English translation copyright 1966 by Victor Gollancz, Ltd.

[2] Anderson, Poul, "Kings Who Die", _If_, March 1962, p8-36. Reprinted in _Seven Conquests_, Poul Anderson, MacMillan Co., 1969.

[3] Asimov, Isaac, "Runaround", _Astounding Science Fiction_, March 1942, p94. Reprinted in _Robot Visions_, Isaac Asimov, ROC, 1990. Asimov describes the development of his robotics stories in this book.

[4] Barrow, John D. and Frank J. Tipler, _The Anthropic Cosmological Principle_, Oxford University Press, 1986.

[5] Bear, Greg, "Blood Music", _Analog Science Fiction-Science Fact_, June, 1983. Expanded into the novel _Blood Music_, Morrow, 1985.

[6] Cairns-Smith, A. G., _Seven Clues to the Origin of Life_, Cambridge University Press, 1985.

[7] Conrad, Michael _et al._, "Towards an Artificial Brain", _BioSystems_, vol 23, pp175-218, 1989.

[8] Drexler, K. Eric, _Engines of Creation_, Anchor Press/Doubleday, 1986.

[9] Dyson, Freeman, _Infinite in All Directions_, Harper & Row, 1988.

[10] Dyson, Freeman, "Physics and Biology in an Open Universe", _Review of Modern Physics_, vol 51, pp447-460, 1979.

[11] Good, I. J., "Speculations Concerning the First Ultraintelligent Machine", in _Advances in Computers_, vol 6, Franz L. Alt and Morris Rubinoff, eds, pp31-88, 1965, Academic Press.

[12] Good, I. J., [Help! I can't find the source of Good's Meta-Golden Rule, though I have the clear recollection of hearing about it sometime in the 1960s. Through the help of the net, I have found pointers to a number of related items. G. Harry Stine and Andrew Haley have written about metalaw as it might relate to extraterrestrials: G. Harry Stine, "How to Get along with Extraterrestrials ... or Your Neighbor", _Analog Science Fact- Science Fiction_, February, 1980, p39-47.]

[13] Herbert, Frank, _Dune_, Berkley Books, 1985. However, this novel was serialized in _Analog Science Fiction-Science Fact_ in the 1960s.

[14] Kovacs, G. T. A. _et al._, "Regeneration Microelectrode Array for Peripheral Nerve Recording and Stimulation", _IEEE Transactions on Biomedical Engineering_, v 39, n 9, pp 893-902.

[15] Margulis, Lynn and Dorion Sagan, _Microcosmos, Four Billion Years of Evolution from Our Microbial Ancestors_, Summit Books, 1986.

[16] Minsky, Marvin, _Society of Mind_, Simon and Schuster, 1985.

[17] Moravec, Hans, _Mind Children_, Harvard University Press, 1988.

[18] Niven, Larry, "The Ethics of Madness", _If_, April 1967, pp82-108. Reprinted in _Neutron Star_, Larry Niven, Ballantine Books, 1968.

[19] Penrose, Roger, _The Emperor's New Mind_, Oxford University Press, 1989.

[20] Platt, Charles, Private Communication.

[21] Rasmussen, S. _et al._, "Computational Connectionism within Neurons: a Model of Cytoskeletal Automata Subserving Neural Networks", in _Emergent Computation_, Stephanie Forrest, ed., pp428-449, MIT Press, 1991.

[22] Searle, John R., "Minds, Brains, and Programs", in _The Behavioral and Brain Sciences_, vol 3, Cambridge University Press, 1980. The essay is reprinted in _The Mind's I_, edited by Douglas R. Hofstadter and Daniel C. Dennett, Basic Books, 1981 (my source for this reference). This reprinting contains an excellent critique of the Searle essay.

[23] Sims, Karl, "Interactive Evolution of Dynamical Systems", Thinking Machines Corporation, Technical Report Series (published in _Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life_, Paris, MIT Press, December 1991).

[24] Stapledon, Olaf, _The Starmaker_, Berkley Books, 1961 (but from the date on the foreword, probably written before 1937).

[25] Stent, Gunther S., _The Coming of the Golden Age: A View of the End of Progress_, The Natural History Press, 1969.

[26] Swanwick, Michael, _Vacuum Flowers_, serialized in _Isaac Asimov's Science Fiction Magazine_, December(?) 1986 - February 1987. Republished by Ace Books, 1988.

[27] Thearling, Kurt, "How We Will Build a Machine that Thinks", a workshop at Thinking Machines Corporation, August 24-26, 1992. Personal Communication.

[28] Ulam, S., Tribute to John von Neumann, _Bulletin of the American Mathematical Society_, vol 64, nr 3, part 2, May 1958, pp1-49.

[29] Vinge, Vernor, "Bookworm, Run!", _Analog_, March 1966, pp8-40. Reprinted in _True Names and Other Dangers_, Vernor Vinge, Baen Books, 1987.

[30] Vinge, Vernor, "True Names", _Binary Star Number 5_, Dell, 1981. Reprinted in _True Names and Other Dangers_, Vernor Vinge, Baen Books, 1987.

[31] Vinge, Vernor, First Word, _Omni_, January 1983, p10.

[32] Vinge, Vernor, To Appear [ :-) ].

Source: http://mindstalk.net/vinge/vinge-sing.html

New JavaScript Engine Module Owner

As you may know, I wrote JavaScript in ten days. JS was born under the shadow of Java, and in spite of support by marca and Bill Joy, JS in 1995 was essentially a one-man show.

I had a bit of help, even at the start, that I’d like to acknowledge again. Ken Smith, a Netscape acquiree from Borland, ported JDK 1.0-era java.util.Date (we both just drafted off of the Java truck, per management orders; we did not demur from the Y2K bugs in that Java class). My thanks also to Netscape 2’s front-end hackers, chouck, atotic, and garrett for their support. EDIT: can’t forget spence on the X front end!

That was 1995. Engine prototype took ten days in May. Bytecode compiler and interpreter from the start, because Netscape had a server-side JS product in the works. The rest of the year was browser integration, mainly what became known as “DOM level 0”. Only now standardized in HTML 5 and Anne’s wg. Sentence fragments here show my PTSD from that sprint :-/.
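
For readers who have not built one, here is a minimal C sketch of the shape being described (an illustration of the architecture only, not Mocha's actual code): source is compiled once into compact opcodes, and a small dispatch loop executes them, so the same compiled form can serve a browser and a server-side product alike.

    /* Hypothetical illustration of a bytecode interpreter: a compiler
     * (not shown) would emit opcodes like these; the loop below then
     * dispatches on them. Invented names, not Mocha's design. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code) {
        int stack[64], sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* The "compiled" form of: print(2 + 3); */
        int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        run(code);
        return 0;
    }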

In 1996 I finally received some needed assistance from RRJ, who helped port David M. Gay and Guy Steele’s dtoa.c and fix date/time bugs.

Also in summer 1996, nix interned at Netscape while a grad student at CMU, and wrote the first LiveConnect. I am still grateful for his generous contributions in wide-ranging design discussions and code-level interactions.

At some point in late summer or early fall 1996, it became clear to me that JS was going to be standardized. Bill Gates was bitching about us changing JS all the time (some truth to it; but hello! Pot meet Kettle…). We had a standards guru, Carl Cargill, who knew Jan van den Beld, then the Secretary-General of ECMA (now Ecma). Carl steered our standardization of JS to ECMA.

Joining ECMA and participating in the first JS standards meeting was an eye-opener. Microsoft sent a B-team, and Borland and a company called NOMBAS also attended. “Success has many fathers” was the theme. The NOMBAS founder greeted me by saying “oh, we’ve been doing JavaScript for *years*”. I did not see how that could be the case, unless JS meant any scripting language with C-based syntax. I had not heard of NOMBAS before then.

At that first meeting, I think I did well enough in meta-debate against the Microsoft team that they sent their A-team to the next meeting. This was all to the good: Microsoft in full-blooded compete mode, together with individual initiative beyond the call of corporate duty by Shon Katzenberger, materially helped create ES1. Sun contributed Guy Steele, who is composed of pure awesome. Guy even brought RPG for fun to a few meetings (Richard contributed ES1 Clause 4).

Meanwhile, in fall 1996, I was under some pressure from Netscape management to write a proto-spec for JS, but that was not something I could do while also maintaining the “Mocha” engine all by myself in both shipping and future Netscape releases, along with all of the DOM code.

This was a ton of work, and on top of it I had to pay off substantial technical debt that I had willingly taken on in the first year. So I actually stayed home for two weeks to rewrite Mocha as the codebase that became known as SpiderMonkey, mainly to get it done (no other way), also to go on a bit of a strike against the Netscape management team that was still underinvesting in JS. This entailed garbage collection and tagged values instead of slower reference-counting and fat discriminated union values.
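
To make the two representations concrete, here is a minimal C sketch (invented for illustration; not the Mocha or SpiderMonkey source): the fat value pairs an explicit type word with a discriminated union, while the tagged value packs the type into the low bits of a single machine word, and reclamation moves from per-value reference counts to a garbage collector that traces the tagged words.

    /* Hypothetical contrast of the two value layouts described above. */
    #include <stdint.h>

    /* Before: a fat discriminated union, two machine words or more per
     * value, with reference counting on the heap objects it points at. */
    typedef struct {
        enum { TYPE_INT, TYPE_DOUBLE, TYPE_OBJECT } type;
        union {
            int32_t i;
            double  d;
            void   *obj;   /* refcounted in the old scheme */
        } u;
    } FatValue;

    /* After: one word per value. Heap pointers are word-aligned, so
     * their low bits are always zero and can hold a type tag instead;
     * doubles move to tagged pointers into heap storage, and a garbage
     * collector, not refcounts, reclaims the objects. */
    typedef uintptr_t jsval_t;

    #define TAG_MASK   ((uintptr_t)0x3)
    #define TAG_OBJECT ((uintptr_t)0x0)   /* untagged, aligned pointer */
    #define TAG_INT    ((uintptr_t)0x1)

    static jsval_t make_int(int32_t i) {
        return ((uintptr_t)(intptr_t)i << 2) | TAG_INT;
    }
    static int is_int(jsval_t v) {
        return (v & TAG_MASK) == TAG_INT;
    }
    static int32_t to_int(jsval_t v) {
        return (int32_t)((intptr_t)v >> 2);   /* arithmetic shift back */
    }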

Also in fall 1996, chouck decided to join me as the second full-time JS team-mate. He and I did some work targeting the (ultimately ill-fated) Netscape 4 release. This work was ahead of its time. We put the JS engine in a separate thread from the “main thread” in Netscape (still in Mozilla). This allowed us to better overlap JS and HTML/CSS/image computations, years ahead of multicore. You could run an iloop (infinite loop) in JS and the “slow script dialog” seamlessly floated above it, allowing you to stop the loop or permit it to continue.
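
A minimal, hypothetical C sketch of that arrangement (invented names, not the Netscape code): the script runs on its own thread and polls an interrupt flag as it loops, so the other thread stays responsive and can stop a runaway script, the way the slow-script dialog did.

    /* Hypothetical sketch: a "script" thread polls a flag so another
     * thread (standing in for the UI) can interrupt it at any time. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_bool stop_script = false;

    /* Stands in for the interpreter running a runaway loop; it checks
     * the flag on every iteration, much as a branch callback would. */
    static void *script_thread(void *arg) {
        unsigned long iterations = 0;
        (void)arg;
        while (!atomic_load(&stop_script))
            iterations++;
        printf("script stopped after %lu iterations\n", iterations);
        return NULL;
    }

    int main(void) {
        pthread_t js;
        pthread_create(&js, NULL, script_thread, NULL);

        sleep(1);   /* the "UI" stays live; pretend the user now
                       clicks "Stop script" in the dialog */
        atomic_store(&stop_script, true);

        pthread_join(js, NULL);
        return 0;
    }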

After summer 1996 and the start of ECMA-262 standardization, Netscape finally invested more in JS. Clayton Lewis joined as manager, and hired Norris Boyd, who ended up creating Rhino from SpiderMonkey’s DNA transcoded to Java. This was ostensibly because Netscape was investing in Java on the server, in particular in an AppServer that wanted JS scripting.

I met shaver for the first time in October 1996 at Netscape’s NY-based Developer Conference, where he nimbly nerd-blocked some Netscape plugin API fanboys and saved me from having to digress from the main thing, which was increasingly JS.

Some time in 1997, shaver contributed “LiveConnect 2”, based on more mature Java reflection APIs not available to nix in 1996. Clayton hired shaver and the JS team grew large by end of 1997, when I decided to take a break from JavaScript (having delivered ES1 and ES2) and join the nascent mozilla.org.

I handed the keys to the JS kingdom to Waldemar Horwat, now of Google, in late 1997. Waldemar did much of the work on ES3, and threw his considerable intellect into JS2/ES4 afterwards, but without overcoming the market power and stalling tactics of Microsoft.

True story: Waldemar’s Microsoft nemesis on TC39 back then, at the time a static language fan who hated JS, has come around and now endorses JS and dynamic languages.

Throughout all of this, I maintained module ownership of SpiderMonkey.

Fast-forward to 2008. After a great (at the time) Firefox 3 release where @shaver and I donned the aging super-hero suits one more time to compete successfully on interpreter performance against JavaScriptCore in WebKit, Andreas Gal joined us for the summer in which we hacked TraceMonkey, which we launched ahead of Chrome and V8.

A note on V8: I’d learned of it in 2006, when I believe it was just starting. At that point there was talk about open-sourcing it, and I welcomed the idea, encouraging any of: hosting on code.google.com, hosting without any pressure to integrate into Firefox on mozilla.org (just like Rhino), or hosting with an integration plan to replace SpiderMonkey in Firefox. I had to disclose that another company was about to release their derived-from-JS engine to Mozilla, but my words included “the more the merrier”. It was early days as far as JS JITs were concerned.

V8 never open-sourced in 2006, and stealthed its way to release in September 2008. This may have been a prudent move by Google to avoid exciting Microsoft. Clearly, in 1995, the “Netscape + Java kills Windows” talk from Netscape antagonized Microsoft. I have it on good authority that a Microsoft board member wrote marca at the end of 1995 warning “you’ve waved the cape in the bull’s face — prepare to get the horns!” One could argue that Chrome in 2008 was the new red cape in the bull’s face, which begot IE9 and Chakra.

Whatever Google’s reasoning, keeping V8 closed-source for over two years hurt JS in this sense: it meant Apple and Mozilla had to climb the JIT learning curves on their own (at first; then finally with the benefit of being able to inspect V8 sources). Sure, the Anamorphic work on Self and Smalltalk was somewhat documented, and I had learned it in the ’90s, in part with a stint on loan from Netscape to Sun when they were doing due diligence in preparation for acquiring Anamorphic. But the opportunity to build on a common engine codebase was lost to path dependence.

On the upside, different competing open source engines have demonstrably explored a larger design space than one engine codebase could under consolidated management.

In any event, the roads not taken in JS’s past still give me pause, because similar roads lie ahead. But the past is done, and once we had launched TraceMonkey, and Apple had launched SquirrelFish Extreme, the world had multiple proofs along with the V8 release that JS was no longer consigned to be “slow” or “a toy”, as one referee dismissed it in rejecting a PLDI submission from Andreas in 2006.

You know the rest: JS performance has grown an order of magnitude over the last several years. Indeed, JS still has upside undreamed of in the Java world where 1% performance win is remarkable. And, we are still at an early stage in studying web workloads, in order to synthesize credible benchmarks. On top of all this, the web is still evolving rapidly, so there are no stable workloads as far as I can tell.

Around the time TraceMonkey launched, Mozilla was lucky enough to hire Dave Mandelin, fresh from PhD work at UCB under Ras Bodik.

The distributed, open source Mozilla JS team delivered the goods in Firefox 4, and credit goes to all the contributors. I single Dave out here because of his technical and personal leadership skills. Dave is even-tempered, super-smart, and a true empirical/skeptical scientist in the spirit of my hero, Richard Feynman.

So it is with gratitude and more than a bit of relief, after a very long 16 years in full, 13 years open source, that I’m announcing the transfer of SpiderMonkey’s module ownership to @dmandelin.

Hail to the king, baby!

Source: https://brendaneich.com/2011/06/new-javascript-engine-module-owner/