A Different Vision from 'Singularity'

Ray Kurzweil, one of Silicon Valley’s favorite nutcases, popularized the concept of the technological singularity in his book “The Singularity Is Near: When Humans Transcend Biology”.

The book builds on the ideas introduced in Kurzweil’s previous books, The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999). This time, however, Kurzweil embraces the term ‘the Singularity’, which Vernor Vinge had popularized more than a decade earlier in his 1993 essay “The Coming Technological Singularity”. The first known use of the term in this context dates to 1958 and is attributed to the Hungarian-born mathematician and physicist John von Neumann.

Kurzweil describes his ‘law of accelerating returns’, which predicts exponential progress in technologies such as computing, genetics, nanotechnology, robotics, and artificial intelligence. He says this will lead to a technological singularity in the year 2045, a point at which progress becomes so rapid that it outstrips humans’ ability to comprehend it.

With this book and its predecessors, he has become the spiritual (indeed religious) leader of the singularity church. The faithful even have their own yearly gathering, the Singularity Summit.

The Singularity Summit is the annual conference of the Machine Intelligence Research Institute. It was started in 2006 at Stanford University by Ray Kurzweil, Eliezer Yudkowsky, and Peter Thiel, and the subsequent summits in 2007, 2008, 2009, 2010, and 2011 were held in San Francisco, San Jose, New York, San Francisco, and New York, respectively. Speakers have included Sebastian Thrun, Sam Adams, Rodney Brooks, Barney Pell, Marshall Brain, Justin Rattner, Peter Diamandis, Stephen Wolfram, Gregory Benford, Robin Hanson, Anders Sandberg, Juergen Schmidhuber, Aubrey de Grey, Max Tegmark, and Michael Shermer.

There have also been spinoff conferences in Melbourne, Australia in 2010, 2011 and 2012. Previous speakers include David Chalmers, Lawrence Krauss, Gregory Benford, Ben Goertzel, Steve Omohundro, Hugo de Garis, Marcus Hutter, Mark Pesce, Stelarc and Randal A. Koene.

Kurzweil was hired by Google, a company now run by a former classmate of mine. Many of my other classmates were also hired by Google, and the rest are mostly scattered across various tech-heavy companies in the Valley. They may not enjoy the heretical message that their concept of the singularity is a gigantic illusion, just like every other religious concept.

The biggest question about a ‘technological singularity’ is who will pay to maintain the technology. The US tech boom of the last 20 years has been funded by the pensions of baby boomers, who sponsored the initial costs. The pension money kept coming for demographic reasons, and this gave the companies building the technology (primarily around Silicon Valley) the illusion of exponential growth. Given that both building and maintaining contemporary technology are extremely capital intensive, the infrastructure can be expected to collapse once money stops pouring into the system. A good example is quoted below.

Following the same argument, here is a ‘non-singularity’ vision of the future.

The Death of the Internet: A Pre-Mortem

All this has been on my mind of late as I’ve considered the future of the internet. The comparison may seem far-fetched, but then that’s what supporters of the SST would have said if anyone had compared the Boeing 2707 to, say, the zeppelin, another wave of the future that turned out to make too little economic sense to matter. Granted, the internet isn’t a subsidy dumpster, and it’s also much more complex than the SST; if anything, it might be compared to the entire system of commercial air travel, which we still have with us for the moment. Nonetheless, a strong case can be made that the internet, like the SST, doesn’t actually make economic sense; it’s being propped up by a set of financial gimmickry with a distinct resemblance to smoke and mirrors; and when those go away (and they will), much of what makes the internet so central a part of pop culture will go away as well.

It’s probably necessary to repeat here that the reasons for this are economic, not technical. Every time I’ve discussed the hard economic realities that make the internet’s lifespan in the deindustrial age roughly that of a snowball in Beelzebub’s back yard, I’ve gotten a flurry of responses fixating on purely technical issues. Those issues are beside the point. No doubt it would be possible to make something like the internet technically feasible in a society on the far side of the Long Descent, but that doesn’t matter; what matters is that the internet has to cover its operating costs, and it also has to compete with other ways of doing the things that the internet currently does.

It’s a source of wry amusement to me that so many people seem to have forgotten that the internet doesn’t actually do very much that’s new. Long before the internet, people were reading the news, publishing essays and stories, navigating through unfamiliar neighborhoods, sharing photos of kittens with their friends, ordering products from faraway stores for home delivery, looking at pictures of people with their clothes off, sending anonymous hate-filled messages to unsuspecting recipients, and doing pretty much everything else that they do on the internet today. For the moment, doing these things on the internet is cheaper and more convenient than the alternatives, and that’s what makes the internet so popular. If that changes (if the internet becomes more costly and less convenient than other options), its current popularity is unlikely to last.

Let’s start by looking at the costs. Every time I’ve mentioned the future of the internet on this blog, I’ve gotten comments and emails from readers who think that the price of their monthly internet service is a reasonable measure of the cost of the internet as a whole. For a useful corrective to this delusion, talk to people who work in data centers. You’ll hear about trucks pulling up to the loading dock every single day to offload pallet after pallet of brand new hard drives and other components, to replace those that will burn out that same day. You’ll hear about power bills that would easily cover the electricity costs of a small city. You’ll hear about many other costs as well. Data centers are not cheap to run, there are many thousands of them, and they’re only one part of the vast infrastructure we call the internet: by many measures, the most gargantuan technological project in the history of our species.

Your monthly fee for internet service covers only a small portion of what the internet costs. Where does the rest come from? That depends on which part of the net we’re discussing. The basic structure is paid for by internet service providers (ISPs), who recoup part of the costs from your monthly fee, part from the much larger fees paid by big users, and part by advertising. Content providers use some mix of advertising, pay-to-play service fees, sales of goods and services, packaging and selling your personal data to advertisers and government agencies, and new money from investors and loans to meet their costs. The ISPs routinely make a modest profit on the deal, but many of the content providers do not. Amazon may be the biggest retailer on the planet, for example, and its cash flow has soared in recent years, but its expenses have risen just as fast, and it rarely makes a profit. Many other content provider firms, including fish as big as Twitter, rack up big losses year after year.

How do they stay in business? A combination of vast amounts of investment money and ultracheap debt. That’s very common in the early decades of a new industry, though it’s been made a good deal easier by the Fed’s policy of next-to-zero interest rates. Investors who dream of buying stock in the next Microsoft provide venture capital for internet startups, banks provide lines of credit for existing firms, the stock and bond markets snap up paper of various kinds churned out by internet businesses, and all that money goes to pay the bills. It’s a reasonable gamble for the investors; they know perfectly well that a great many of the firms they’re funding will go belly up within a few years, but the few that don’t will either be bought up at inflated prices by one of the big dogs of the online world, or will figure out how to make money and then become big dogs themselves.

Are you preparing for the future?

----------------------------------------

P.S.

One big component of the internet’s maintenance cost that Mr. Greer often highlights is disk drives. If you have not worked on the hardware side of a data center, the following post may help you appreciate the real capital cost of keeping servers running 24x7.
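
Before getting to it, here is a minimal back-of-the-envelope sketch of the scale involved. The fleet size, failure rate, and drive price below are hypothetical round numbers chosen only for illustration; they are not figures from Mr. Greer’s essay or from the post quoted below.

```python
# Back-of-the-envelope sketch: how fast a large disk fleet consumes replacements.
# Every number here is a hypothetical assumption chosen only for illustration.

fleet_size = 1_000_000        # drives operated by one large company (assumed)
annual_failure_rate = 0.02    # 2% of drives replaced per year (assumed)
drive_price_usd = 150         # average price of one replacement drive (assumed)

failures_per_year = fleet_size * annual_failure_rate
failures_per_day = failures_per_year / 365
replacement_spend = failures_per_year * drive_price_usd

print(f"Drives replaced per day:    {failures_per_day:,.0f}")
print(f"Replacement spend per year: ${replacement_spend:,.0f}")
# With these assumptions: roughly 55 dead drives a day and about $3,000,000 a year
# on replacement disks alone, before power, cooling, networking, or staff.
```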

On Disk Failure

And we never failed to fail

It was the easiest thing to do

Stephen Stills, Rick and Michael Curtis; Southern Cross (1981)

With Brian Beach’s article on disk drive failure continuing to stir up popular press and criticism, I’d like to discuss a much-overlooked facet of disk drive failure. Namely, the failure itself. Ignoring for a moment whether Beach’s analysis is any good or the sample populations are even meaningful, the real highlight for me from the above back-and-forth was this comment from Brian Wilson, CTO of Backblaze, in response to a comment on Mr. Beach’s article:

Replacing one drive takes about 15 minutes of work. If we have 30,000 drives and 2 percent fail, it takes 150 hours to replace those. In other words, one employee for one month of 8-hour days. Getting the failure rate down to 1 percent means you save 2 weeks of employee salary, maybe $5,000 total? The 30,000 drives cost you $4 million.

The $5k/$4 million means the Hitachis are worth 1/10th of 1 percent higher cost to us. ACTUALLY we pay even more than that for them, but not more than a few dollars per drive (maybe 2 or 3 percent more).

Moral of the story: design for failure and buy the cheapest components you can. :-)
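
As an aside, the arithmetic in the quoted comment does hold together on its own terms. Here is a small sketch reproducing it from the figures above, with nothing added beyond the numbers Mr. Wilson himself gives:

```python
# Reproducing the back-of-the-envelope arithmetic from the quoted comment.

drives = 30_000
failure_rate = 0.02          # 2 percent of drives fail
minutes_per_swap = 15

swap_hours = drives * failure_rate * minutes_per_swap / 60
print(f"Hours of swap work at a 2% failure rate: {swap_hours:.0f}")   # 150 hours

# Halving the failure rate to 1 percent saves half of that labor,
# which the comment prices at roughly $5,000 for two weeks of work.
labor_savings = 5_000
fleet_cost = 4_000_000       # 30,000 drives at roughly $133 apiece

print(f"Savings as a share of fleet cost: {labor_savings / fleet_cost:.3%}")
# 0.125%, i.e. the quoted '1/10th of 1 percent' premium worth paying per drive.
```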

In a follow-up comment, after being rightly taken to task by other commenters for, among other things, ignoring the effect of higher component failure rates on overall durability, he added the disclaimer that his observation applies only to his company’s application. That rang hollow for me. Here’s why.

The two modern papers on disk reliability that are more or less required reading for anyone in the field are the CMU paper by Schroeder and Gibson and the Google paper by Pinheiro, Weber, and Barroso. Both repeatedly emphasise the difficulty of assessing failure and the countless ways that devices can fail. Both settle on the same metric for failure: if the operator decided to replace the disk, it failed. If you’re looking for a stark damnation of a technology stack, you won’t find a much better example than that: the only really meaningful way we have to assess failure is the decision a human made after reviewing the data (often a polite way of saying groggily reading a pager at 3am or receiving a call from a furious customer). Everyone who has actually worked for any length of time for a manufacturer or large-scale consumer of disk-based storage systems knows all of this; it may not make for polite cocktail party conversation, but it’s no secret. And that, much more than any methodological issues with Mr. Beach’s work, casts doubt on Mr. Wilson’s approach. Even ignoring for a moment the overall reduction in durability that unreliable components create in a system, some but not all of which can be mitigated by increasing the spare and parity device counts at increased cost, the assertion that the cost of dealing with a disk drive failure that does not induce permanent data loss is the cost of 15 minutes of one employee’s time is indefensible. True, it may take only 15 minutes for a data centre technician armed with a box of new disk drives and a list of locations of known-faulty components to wander the data centre verifying that each has its fault LED helpfully lit, replacing each one, and moving on to the next, but that’s hardly the whole story.
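
The durability point deserves a number or two. The following is a deliberately oversimplified sketch, assuming single-parity groups, statistically independent failures, a constant failure rate, and a fixed 24-hour rebuild window; none of these assumptions comes from the post, and real systems behave worse, but it shows the direction in which cheaper, less reliable drives push the risk:

```python
# Oversimplified durability model: a single-parity group of `group_size` drives
# loses data if a second drive fails before the first failure finishes rebuilding.
# Assumes independent failures and a constant failure rate (both generous).

def loss_chance_per_rebuild(afr: float, group_size: int, rebuild_hours: float) -> float:
    """Chance that at least one surviving drive fails during one rebuild window."""
    hourly_rate = afr / 8760.0                       # crude annual-to-hourly conversion
    survives_window = (1.0 - hourly_rate) ** rebuild_hours
    return 1.0 - survives_window ** (group_size - 1)

for afr in (0.01, 0.02, 0.04):                       # assumed annual failure rates
    p = loss_chance_per_rebuild(afr, group_size=12, rebuild_hours=24)
    print(f"AFR {afr:.0%}: {p:.3%} chance of a second failure during one rebuild")

# Doubling the failure rate roughly doubles the exposure per rebuild, and the
# cheaper drives also fail more often, so they enter this risky window more often.
```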

“That’s just what I wanted you to think, with your soft, human brain!”

Given the failure metric we’ve chosen out of necessity, it seems like we need to account for quite a bit of additional cost. After all, someone had to assemble the list of faulty devices and their exact locations (or cause their fault indicators to be activated, or both). Replacements had to be ordered, change control meetings held, inventories updated, and all the other bureaucratic accoutrements made up and filed in triplicate. The largest operators and their supply chain partners have usually automated some portion of this, but that’s really beside the point: however it’s done, it costs money that’s not accounted for in the delightfully naive 15-minute model of data centre operations. Last, but certainly not least, we need to consider the implied cost of risk. But the most interesting part of the problem, at least to me personally, is the first step in the process: identifying failure. Just what does that entail, and what does it cost?
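
To make that point concrete, here is a sketch of what a fuller per-failure cost model might look like. Every line item and dollar figure below is a hypothetical placeholder of my own; the only claim is structural, namely that the hands-on swap is one entry among several:

```python
# Hypothetical per-failure cost model; all figures are placeholder assumptions.
# The structural point: the 15-minute swap is only one line item among several.

hourly_rate = 40.0   # assumed fully loaded cost of an hour of staff time

cost_per_failure = {
    "identifying the failure (monitoring, triage, 3am pages)": 2.00 * hourly_rate,
    "ordering replacements and supply-chain handling":         0.50 * hourly_rate,
    "change control, inventory updates, and paperwork":        1.00 * hourly_rate,
    "the 15-minute physical swap":                             0.25 * hourly_rate,
    "implied cost of risk while the group runs degraded":      50.00,
}

swap_only = cost_per_failure["the 15-minute physical swap"]
total = sum(cost_per_failure.values())

print(f"Naive swap-only estimate: ${swap_only:,.2f}")
print(f"Modeled total:            ${total:,.2f} ({total / swap_only:.0f}x the naive figure)")
```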



Written by M. //