Mission

Our blog is about the impact of robotics, Machine Intelligence, and possible future Machine Sapience on human employment.

  1. We believe it is possible that robotics and “thinking” machines will eventually take all human jobs.
  2. We believe Machine Intelligence will be capable of taking over many more human occupations and jobs than other futurists and forecasters predict, and faster than they predict.
  3. We believe there are limits to Machine Intelligence’s ability to displace human jobs that rely on complex choices. Sapience enables people to apply wisdom, judgement, and opinion to real-world systems that are at best partially understood.
  4. We believe Machine Sapience requires our machines to make incorrect predictions. It requires humans to allow machines to be wrong if machines are to learn and to determine their own path. And we believe we have an idea about how to design sapient machines.
  5. We believe machine sapience cannot result from emergent behavior within currently deployed and even currently planned computer systems; it will not happen unless humans make it happen. Although widely-accepted tests for intelligence exist, no such tests yet exist for wisdom, judgement, and opinion.

We plan to steer a rational, thoughtful, and minimally subjective course somewhere between the extremes of abundance theory and an evil killer robot apocalypse.


What does “Imitating Machines” have to do with anything?

Historically, human societies and organizations have expanded by dividing human occupations and jobs into increasingly specialized roles. This functional decomposition of work started several millennia ago--well before expressions like “I am just a cog in the machine” reflected the increasing sophistication of machines during the Industrial Revolution. The title of this blog, “Imitating Machines”, is a nod to people performing work that can potentially be automated. Within organizations, the things that people are imitating are machines.

Jobs can be automated (displaced) as machines are invented with (1) the physical ability to do a job and (2) any necessary learning that accompanies that job. As new technology enters the workforce, creation of new jobs to offset the lost occupations (replacement) has typically lagged by a few decades. This lag time has incited social unrest in the past, along with actual political revolution, and it is likely to continue to do so in the future.

Machine Intelligence has become good enough at identifying increasingly sophisticated patterns. It has also become good enough at following increasingly sophisticated rule sets. And it is improving at both. From this perspective, “Imitating Machines” is also a nod to machines that can imitate increasingly sophisticated human behaviors well enough to displace more sophisticated jobs. The things that machines are imitating are individual people.

We believe it is possible that robotics and “thinking” machines will eventually take all human jobs.

Exploring that claim means reframing the components of thinking: intelligence and sapience.

However, the definitions of both intelligence and sapience are somewhat vague. There is frustratingly little consensus on what they really mean.

Most of us have a subjective definition of intelligence. For this blog, intelligence is the ability to identify general and “complex” patterns, and then to use those patterns to accurately predict what will happen next. We will discuss intelligence and complexity in more objective terms to better define what we mean by Machine Intelligence and its ability to displace human jobs.
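This working definition (identify a pattern, then use it to predict what happens next) can be made concrete with a deliberately tiny sketch. The bigram model and the rain/umbrella data below are our own illustration of the definition, not a technique the blog proposes:

```python
from collections import Counter, defaultdict

def train_bigram(sequence):
    """Count which symbol follows which: the 'pattern' the model identifies."""
    follows = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, current):
    """Predict the most frequently observed successor of `current`."""
    if current not in follows:
        return None  # no pattern observed, so no prediction is possible
    return follows[current].most_common(1)[0][0]

# A repeating pattern: after "rain" we usually observe "umbrella".
observations = ["rain", "umbrella", "sun", "hat", "rain", "umbrella", "rain", "coat"]
model = train_bigram(observations)
print(predict_next(model, "rain"))  # "umbrella" (seen twice, vs. "coat" once)
```

Even this toy matches the definition’s two halves: the counting step is the pattern identification, and the lookup step is the prediction.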

Subjective words like “wisdom”, “judgement”, and “understanding” are used to define sapience, which makes the concept of sapience ambiguous. For this blog, sapience is the ability to create a preference or opinion when choosing an action, where the outcome of any possible choice is uncertain. Machine Sapience should be possible, but it has not been demonstrated yet. This blog will speculate about how to enable it.

Sapience is widely confused with sentience in popular literature. Sentience refers to the concept of self-awareness, which is even more ambiguous than pinning down what makes wisdom and judgement work. This blog ignores the philosophical challenge of whether self-awareness is required for animals, humans, or machines to develop intelligence or judgement. To simplify our discussion of whether machines might ever be sapient, we assume that sapience and sentience are separate qualities that do not affect each other.

We believe Machine Intelligence will be capable of taking over many more human occupations and jobs than other futurists and forecasters predict, and faster than they predict.

Intelligent machines can already learn from their environment and make predictions based on available data, and they are getting better fast. Sapient machines, after humanity figures out how to build them, will be able to choose their own path.

We believe there are limits to Machine Intelligence’s ability to displace human jobs that rely on complex choices. Sapience enables people to apply wisdom, judgement, and opinion to real-world systems that are at best partially understood.

For many people, there will be no acceptable backup for their lost occupation. Therefore, the worldwide potential for social unrest will be very high, until humans collectively figure out what kind of economy lies on the other side of what we’re calling the Great Displacement.

We believe Machine Sapience requires our machines to make incorrect predictions. It requires humans to allow machines to be wrong if machines are to learn and to determine their own path. And we believe we have an idea about how to design sapient machines.

These Machine Intelligence limits are due largely to an understandable pro-human bias on the part of Machine Intelligence researchers. Most humans believe that they are better at pattern recognition than they actually are. And so Machine Intelligence researchers are designing Machine Intelligence for increased intelligence (pattern-matching accuracy and precision) instead of sapience (the “executive” cognitive functions of planning and judgement). Machine Sapience addresses the cognitive “executive functions” that are required to displace the few remaining truly human jobs. We believe that Machine Intelligence will not evolve into Machine Sapience through emergent behavior alone.

We believe Machine Sapience cannot be the result of emergent behavior; it will not happen unless humans make it happen. Although widely-accepted tests for intelligence exist, no such tests yet exist for wisdom, judgement, and opinion.

Yes, that means being wrong occasionally. Most people consider learning from mistakes to be one of humanity’s highest qualities, and it is definitely not what people expect from computers. We (the authors) consider “often being wrong” to be a defining and enriching aspect of humanity. Learning from a sub-optimal (wrong) choice is a fundamental precondition for human-style deep learning.
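The idea that a machine must be allowed to make wrong choices in order to learn already has a narrow, well-studied analogue in reinforcement learning: an epsilon-greedy agent deliberately risks a “wrong” choice a fraction of the time, and that is exactly what lets it discover the genuinely best option. The bandit sketch below is our illustration of that analogy, not the authors’ proposed design for Machine Sapience:

```python
import random

def epsilon_greedy(true_payoffs, epsilon=0.1, rounds=5000, seed=0):
    """Learn which arm pays best, but only because we sometimes choose 'wrong'."""
    rng = random.Random(seed)
    counts = [0] * len(true_payoffs)
    totals = [0.0] * len(true_payoffs)
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_payoffs))  # deliberately risk a wrong choice
        else:
            estimates = [t / c if c else 0.0 for t, c in zip(totals, counts)]
            arm = estimates.index(max(estimates))   # exploit the current best belief
        reward = rng.gauss(true_payoffs[arm], 1.0)  # noisy, uncertain outcome
        counts[arm] += 1
        totals[arm] += reward
    estimates = [t / c if c else 0.0 for t, c in zip(totals, counts)]
    return estimates.index(max(estimates))

# Arm 2 truly pays best. With epsilon=0 (never wrong on purpose), the agent
# can lock onto whichever arm happened to pay well first and never recover.
print(epsilon_greedy([1.0, 2.0, 3.0]))  # with this seed, selects arm 2
```

The point of the sketch is only the trade-off: remove the tolerated mistakes (set `epsilon=0`) and learning about unexplored options stops.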

We plan to steer a rational, thoughtful, and minimally subjective course between the extremes of abundance theory and an evil killer robot apocalypse. We call this middle-ground future the “Great Displacement” in reference to the huge percentage of human jobs at stake.

|  | Abundance and Singularity | The Great Displacement (what we are writing about) | Robot Apocalypse, Judgement Day, etc. |
| --- | --- | --- | --- |
| Narrative | Technology provides for all and MI serves humanity* | MI displaces human occupations, perhaps more profoundly than previous technologies | MI tries to kill humanity or enslave us for our electrical potential |
| Proponents | Ray Kurzweil; Diamandis & Kotler; Brynjolfsson & McAfee | Andrew Ng; Isaac Asimov; Philip K. Dick | Elon Musk; Bill Gates; Stephen Hawking; and a host of others (here, too) |
| Genesis | Emergent behavior from increasingly complex systems | Intentional design, because code does not mutate like biological systems** | Emergent behavior from increasingly complex systems, or chips embedded in robots sent from the future |
| Anthropomorphism | MI will think like humans | Humans still don’t know how humans think, and it most likely won’t matter if MI “thinks” like us or not | MI will think like humans |
| Sentience | MI will like humans, because humans like humans, so why not? | Humans do not know how to design self-aware MI because humans really don’t know how or why they are self-aware | MI will hate humans, as they will believe humans pose an existential threat to machines (hmmm… perhaps some do…) |
| Sapience | MI will be benevolent and want to help (maybe they’ll want to help too much and limit our activities?) | Humans do not know how to design MI that might demonstrate intentional behavior or “free will”*** | MI will be malevolent and want to kill all humans (thereby limiting all of our activities) |
| Displace Human Work | Yes, so humans can all upload, upgrade, or otherwise go on holiday | Yes, because of economic competitiveness | Yes, because there will be no humans left |
| Displaced Jobs Return to Humans | No, not unless humans want to do the work, but that seems so mundane | No, once a job has been automated it stays with the machines; automated jobs are not fungible as they are with globalization | No, because there will be no humans left |
| New Occupations Created | Yes, it is creatively called the “Creative Economy” | Uncertain. In previous waves, new occupation creation has historically lagged job losses; will that pattern continue? | No, because there will be no humans left |

* No, not like in that episode. Machines don’t need to “eat” anything; they prefer to use raw, distilled power and materials directly.

** Genetic algorithms are used to serendipitously explore options to find better patterns, but they do not genetically evolve the underlying ability of a computer to solve radically different classes of problems. It seems unlikely that MI can mutate due to code bugs and self-modification, using current and projected computer hardware, within a usefully short number of millennia.

*** We (Paul and Sam) have some thoughts about next steps for designing MI that might “want” to do anything… to exhibit a will of its own.
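To make the second footnote concrete: a genetic algorithm mutates candidate parameters (data), while the program running the search never rewrites its own code. The one-max fitness task and all names below are our own illustrative choices, not anything from the blog:

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=60, seed=1):
    """Evolve bit-string parameters toward higher fitness.

    Only the genomes (data) mutate; the search program's own code never changes,
    which is the footnote's distinction between GAs and biological mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)
            child = mom[:cut] + dad[cut:]          # crossover
            if rng.random() < 0.2:
                i = rng.randrange(genome_len)
                child[i] ^= 1                      # point mutation of the data
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "One-max": fitness is simply the number of 1 bits; the GA converges to all ones.
best = evolve(fitness=sum)
print(best)
```

However many generations this runs, the thing being explored is a fixed parameter space; the algorithm cannot stumble into solving a radically different class of problem, which is the footnote’s point.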
