Coexistence of Humans & AI

This video is sponsored by CuriosityStream. Get access to my streaming video service, Nebula, when you sign up for CuriosityStream using the link in the description.

In our era, we are very concerned about how
the rise of Artificial Intelligence will affect our lives and society. But could there come a point where we will
have to care about how our actions affect them? One of the most exciting and possibly troubling
areas of development in the Computer Age is the rise of Artificial Intelligence. Can humans make something as smart or smarter
than ourselves? And if we do, how do we keep it from wiping
us out? Or worse, from turning us into a disenfranchised
minority in a two-species civilization that we started? What happens when our tools want to be treated
with respect and allowed to make decisions of their own? This inevitably brings up notions of controls,
safeguards, and overrides we might build into AIs. But those avenues also inevitably bring up
concerns about ethics. The more intelligent AI become, the more we
worry about keeping control, and the more like slavery the whole arrangement becomes. That, along with the literal or existential
threat they represent to humans, leads some to think of AI as a sort of Pandora’s box
that should be put aside and never opened. But is this caution truly necessary? Should
we set aside the potential benefits that AI presents? Let’s start that discussion with a look
at the safeguards we might develop. Isaac Asimov’s Three Laws of Robotics is
a good starting point. These basically require robots to not hurt
humans or let us be hurt, to obey our orders, and to protect themselves. The Three Laws are in order of priority, so
for example a robot can disobey a human’s order to harm a human, and it can let itself
get hurt in order to obey an order or protect someone. That seems like a smart ordering to me, and
there’s an excellent XKCD comic that examines the consequences of all six possible orderings.
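As a purely illustrative sketch, my own toy example rather than anything from Asimov or real robotics, a priority ordering like this can be expressed as checking candidate actions against the laws from the top down and preferring whichever action’s worst violation sits lowest on the list:

```python
# Toy sketch only: the Action type, its predicates, and the law list are made up.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False
    allows_human_harm: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

# Laws listed from highest to lowest priority; each predicate returns True if violated.
LAWS = [
    ("First Law", lambda a: a.harms_human or a.allows_human_harm),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law", lambda a: a.endangers_self),
]

def highest_violated(action):
    """Return the index of the highest-priority law the action breaks, or None."""
    for i, (_, violated) in enumerate(LAWS):
        if violated(action):
            return i
    return None

def choose(candidates):
    """Prefer actions whose worst violation is as low-priority as possible."""
    def rank(a):
        v = highest_violated(a)
        return len(LAWS) if v is None else v
    return max(candidates, key=rank)

# A robot may let itself get hurt (Third Law) rather than disobey an order (Second Law).
obey_dangerous_order = Action(endangers_self=True)
refuse_order = Action(disobeys_order=True)
assert choose([obey_dangerous_order, refuse_order]) is obey_dangerous_order
```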
But a key idea in discussing Laws for AI that doesn’t seem to get examined much in sci-fi is exactly how you’d enforce the laws. If a dumb machine is just running human-written instructions, it’s easy to tell it what to do or not do. In fact, what makes the machine dumb but useful
is that it does precisely what you program, nothing more or less. But AIs will be making judgements based on
their life experience and training data. Even if they think very differently than us,
they’ll think more like us than like dumb machines. And that means getting them to obey a law
will be a lot like getting humans to obey a law—in other words not always successful. So taking a look at how we get humans to mostly
obey laws might give a clearer idea of what it will take for AIs. Humans actually do come pre-programmed by
nature with certain safeguards. We feel instinctual aversions to certain deeds,
like murder and theft from our peers, and to people who do them. In many mammal species it is rare for an interaction between members of the same species to end with one intentionally killing the other; they have options besides just fight or flight, often dominating or submitting to one another within a social hierarchy instead. There’s an instinctive inhibition there,
because killing members of your own species is bad for your species. Since we come equipped with such an instinct
for survival of the species, that’s a pretty strong endorsement for it being practical,
ethical, and reasonable to design into our own creations. Of course you wouldn’t want to give them
an instinct to preserve simply their own species; you’d want either an instinct to preserve our species too, or one to regard themselves as part of our species. But there’s still the puzzle of how to program
in such an instinct. Since we have one, it’s presumably doable,
but we don’t currently have a very clear idea of how. We can train an AI to recognize if the object
it sees is a dog or not, and likewise, we could in theory train a more advanced AI to
correctly recognize that its actions would be deemed Good or Bad, Okay or Forbidden by
its creators. But that’s not the same as getting it to
act upon that assessment the way we’d want it to.
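As a minimal sketch of that gap, purely illustrative and with made-up function names: a model that can label a proposed action as forbidden only becomes a safeguard if the action-selection loop is actually wired to consult that label.

```python
# Illustrative only: the planner, classifier, and action strings are hypothetical stand-ins.
from typing import Callable, Optional

def plan_actions(goal: str) -> list[str]:
    # Hypothetical planner; in a real system this would be a learned policy.
    return [f"{goal} quickly, ignoring bystanders", f"{goal} carefully"]

def judged_forbidden(action: str) -> bool:
    # Hypothetical classifier trained to predict its creators' verdict.
    return "ignoring bystanders" in action

def select_action(goal: str, screen: Callable[[str], bool]) -> Optional[str]:
    """Recognition becomes a safeguard only when the selector acts on it."""
    for action in plan_actions(goal):
        if not screen(action):
            return action
    return None  # refuse rather than take a forbidden action

print(select_action("deliver the package", judged_forbidden))  # "deliver the package carefully"
```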
Of course, our instinct isn’t a 100% safeguard, or anywhere near that, and while we don’t need 100%, I suspect we’d like an inhibition
that lowered the odds even more than it does in humans, especially for anything trusted
with much power. We screen humans before giving them the keys
to the vault, so to speak, and this might be far easier and more effective with a created
mind where you can peek around inside and see what it’s actually thinking. That’s thin ice if taken too far, though: I wouldn’t want my brain scanned, and we only do things like lie-detector tests voluntarily. A machine could obviously be forced to let us look inside its brain, but it might come to resent that. Alternatively, we could install a “kill
switch” that would sense a forbidden activity and shut down or disable the AI. Or, it might activate a protocol for some
other action like returning to the factory, but you might not trust it to do that if it’s
already misbehaving. But however it specifically works, this amounts
to equipping the AI with its own internal policeman, not actually making it obey the
law.
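In software terms, that internal policeman might look something like the following sketch, a wrapper around the agent rather than a change to the agent itself; the agent interface and the forbidden-action check are hypothetical stand-ins.

```python
import logging

class Watchdog:
    def __init__(self, agent, is_forbidden, on_trip=None):
        self.agent = agent                # must expose an act(observation) method
        self.is_forbidden = is_forbidden  # predicate over proposed actions
        self.on_trip = on_trip            # e.g. power down, or "return to the factory"
        self.tripped = False

    def step(self, observation):
        if self.tripped:
            return None                   # stays disabled once tripped
        action = self.agent.act(observation)
        if self.is_forbidden(action):
            logging.warning("Kill switch tripped by proposed action: %r", action)
            self.tripped = True
            if self.on_trip is not None:
                self.on_trip()            # the recall protocol we might not trust
            return None
        return action
```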
We already do something like this to people with aversion therapies, the most famous fictional depiction of which was in the movie A Clockwork
Orange. But the protagonist in that story was not
turned into a gentle soul who abhorred violence; he was just conditioned to feel so physically
ill around violence that he couldn’t engage in it, not even to defend himself. Something like that might be a good failsafe,
but you only invoke failsafes when something has failed pretty badly. Most human beings have never actually killed
a human, and rarely seriously contemplate it, often finding the concept truly repulsive
when taken beyond the theoretical. That’s how deep the natural inhibition is
programmed into us. And presumably that’s how we’d want to
program our AIs, just disinclined to consider harming us—to not ever reach the point of
considering it a great option if they could only defeat that darned kill switch. As a last resort, we do get some humans to
obey laws only by promising painful consequences if they don’t. This works on risk-averse people, but it turns
others into more determined, sneakier outlaws—and it might very well do the same to disobedient AIs that we punish. This also raises the issue of exactly what
an AI would consider painful that could be used to punish it. The T-800 from Terminator 2 said during surgery
that it can sense injuries, and that data could be called pain. But we probably wouldn’t call it pain in
that instance, because it was pretty clear that the T-800 wasn’t particularly bothered
by it: it didn’t ever do much to avoid injuries, nor did it seem hampered by
the sensation. So threatening to subject a disobedient AI
to a great deal of injury data might not be enough to get it to change its behavior. Suffering is not just data, but an overpowering,
all-consuming, irresistible compulsion to somehow make this data stop. Even if we instill our AI with a real and
healthy aversion to harming others, not just an override or a fear of consequences, how
do we factory-test its reliability? Even people whose natural aversion to violence
is strong can be pushed to overcome it. One thing you can do with AIs that you can’t
really do with people is run them through a vast number of simulations before releasing
them into the world. You could run copies of your AI in quadrillions
of hypothetical situations, and in the end feel fairly certain that this AI would not
harm a human—a standard many if not most humans would fail.
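A hedged sketch of what such screening might look like, with scenario generation and the harm check as made-up stand-ins:

```python
import random

def random_scenario(rng: random.Random) -> dict:
    # Hypothetical: sample a situation the AI could face after release.
    return {"bystanders": rng.randint(0, 5), "time_pressure": rng.random()}

def harmed_someone(policy, scenario) -> bool:
    """Return True if the policy's behavior in this scenario harmed a human."""
    return policy(scenario) == "harmful"

def estimated_harm_rate(policy, trials: int = 1_000_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    harms = sum(harmed_someone(policy, random_scenario(rng)) for _ in range(trials))
    return harms / trials

# A policy that passes millions of varied scenarios earns some confidence, but
# only about its current "factory settings", not about what it learns later.
cautious_policy = lambda scenario: "safe"
print(estimated_harm_rate(cautious_policy, trials=10_000))  # 0.0
```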
But even that kind of rigorous testing would only tell you that it won’t harm people right now, with its current factory settings. There’s no telling how a lifetime of experience
and genuine learning will change it and help it overcome its youthful inhibitions. And of course, to have any useful laws over
AIs, you’d need to add an inviolable Zeroth Law: that no AI shall reprogram itself or another AI to violate the Laws of AIs. And this is about where, as I mentioned earlier,
the more intelligent the AIs become, the more we’ll want to keep them under control, and
the more that control starts to feel like interspecies slavery. Apart from the potential ethical issues, there
are practical reasons you never make any machine smarter than it needs to be to do its job. Brains, organic or synthetic, are expensive
to build, maintain, and operate. My vacuum cleaner doesn’t need to be able
to join MENSA, and one single big brain is not cheaper than a whole bunch of small ones. Now this episode is not just about robots,
it’s about AI, and that will exclude most robots and include many options which don’t
even have a body in any classic sense of the word. A customer-service machine hardly needs a
body. We can classify AI as being of three broad types: subhuman, human, and superhuman. Coexisting with the first, subhuman, is probably not an issue unless they are very close to human; if we’re limiting ourselves only to things smart enough to qualify as a pet-like intelligence, you get around rebellion and ethics by making them like what they do and having some ethics about what you make them do. When talking about levels of intelligence
this way, we’re assuming it parallels nature. A given AI might be superhuman in some mental respects, the same as many machines are superhuman in physical respects. The usual example is an idiot savant, but this doesn’t really do the idea justice; it could be so far off the mental architecture of what we’d think of as natural that we didn’t even realize it was sentient. The AI may even be tied to a cloud intelligence, remaining quite dumb for its normal functions but able to elevate its intelligence as needed under special circumstances. As such it might actually be very dangerous
to us even if it was rather dumb in many ways. Some megacomputer with the mind of an insect
might not have much concept of ethics but be quite capable of beating every human in
the world at chess simultaneously, or of determining that money is effectively power or food for
it and hacking bank accounts to acquire those, even though it was too stupid to even recognize
what a human was, let alone talk with one. However, besides cases like this, we’re really
only concerned with those of about human intelligence or the superhuman. We may use subhuman intelligences far more
often, such as pet-level automatons, but they aren’t likely to represent a scenario for
rebellion, and ethically it’s more a question of cruelty, paralleling existing concerns about
animal cruelty. This also brings up the question of how you
make an AI in the first place, and there are essentially three methods: you can hard-code every single bit, you can create something self-learning, or you can copy something that already exists. Needless to say, it doesn’t have to be one or another; it can be a bit of two or all three. You might copy a human mind, or a dog’s, then tweak the code a bit, and that might be self-learning by default, though you could presumably freeze or limit that capacity.
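As a loose analogy from present-day machine learning, and only an analogy I’m adding for illustration: copying something that exists, tweaking it, and then freezing part of it is roughly what transfer learning with frozen layers does.

```python
# Illustrative analogy only; the "pretrained mind" here is just a small stand-in network.
import copy
import torch.nn as nn

pretrained_mind = nn.Sequential(               # stand-in for the mind being copied
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 8),
)

tweaked_copy = copy.deepcopy(pretrained_mind)  # copy the existing "mind"
tweaked_copy[-1] = nn.Linear(32, 4)            # tweak it for a new task

for param in tweaked_copy[0].parameters():     # freeze part of it so that this
    param.requires_grad = False                # capacity can no longer change
```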
Similarly, even something you let entirely self-learn from the ground up is going to be copied off humans in at least some respects, since it has to acquire its initial knowledge of Life, the Universe, and Everything from human sources. And it does need to. Yes, in theory, if it can self-learn and self-improve
– which is a combination that always strikes me as a bad idea to make in general – then
it could start from scratch and replicate everything known to man. In practice, this common concern of science
fiction tends to ignore how science and learning actually happen. You have to run experiments on reality to
determine how things work, and I don’t just mean for science; that’s life, you need to test stuff out. You also don’t bother repeating labor, so it’s going to access our existing knowledge, and it’s going to pick up more than data along the way: our thoughts, behaviors, and culture might come as part of the package. Whatever develops might not be anything nearly human, but it will definitely be influenced by all that, same as any child. The final result might be more alien than
any alien nature might produce or so human it was indistinguishable in mind and personality
from a human. Something similar would apply to an AI copied from an
existing human mind. But this copy might begin to diverge from
human rather quickly. Apart from changes caused by the mind being
disembodied or housed in an android body, you also might upgrade that mind for a certain
task, which includes stripping away any personality or motives that might distract from that task. A surgeon might volunteer to have his mind
copied for use in automated surgeries, but he won’t want his private thoughts copied
everywhere. And you don’t want your surgeon bot distracted
thinking of that argument with his wife this morning, but you might want it tweaked to
be more compatible with the surgical equipment and fully up to date with all medical knowledge. An integrated capacity to take and read MRIs
with the same intuitive ease as our other senses would also be a fine enhancement, but
it would require some fairly major overhauls of brain architecture. Now, this is interesting because when we talk
about Humans and AI coexisting we often discuss it the way we discuss coexisting with an alien
species, separate groups bordering on each other or working together but remaining fundamentally
separate. We get some exceptions to this like half-human
hybrids, the half-human, half-Vulcan Spock from Star Trek being a well-known example. While mixing humans into some alien lifeform
is not terribly realistic, as the two would be genetically more different than a human and an oak tree (which obviously can’t have children together), the Human and AI case is quite different. If humans ever do meet aliens, we’ll start
out as completely separate cultures and see how well we mingle. But if we develop true AIs, they’ll start
out as integral parts of our culture and perhaps evolve some capacity for independence. Just as an uploaded mind or a self-learning
AI raised by humans might be quite human, a cyborg or Transhuman might resemble an AI
or robot. It’s not likely to be two distinct spheres
with a bit of overlap, or some spectrum, but more like a map that shows two peaks or mountains
with all sorts of connecting lesser peaks and foothills nearby, and any given entity
might be anywhere on that map but is most likely to be near one of those two peaks or
another lesser peak representing a fairly common type of middle ground. Often when you have two seemingly distinct
groups that have a ton of specific traits you can get a bit of a false dichotomy in
play, where in reality all sorts of points in between might be occupied as well as lots
of things a bit off to one side. To take an extreme case, we might decide a
sheep’s mind was ideal for lawn maintenance robots, with a bit of tweaking. Such a device or creature might be a major
commercial success that results in its creators deciding an enhanced version with near-human
intelligence would be ideal for supervising large flocks of them and interacting with
humans who were designating projects, like say the maintenance of an entire metropolis’s
park and garden system. That overseer sheep AI might come to think
of itself as rather human and be accepted as a valued asset of the community and given
citizen status and pay. I’m not sure where such an entity would
fit on the Human & AI landscape, but let us now imagine that, while it considered itself a human, or at least broadly a person, it might empathize with natural sheep and help find
support for an uplifting project to create human-intelligent sheep. Such an uplifted sheep would seem to represent
an entirely new, if organic, peak on our landscape, but some of them might prefer further genetic
or cybernetic modification to be more humanoid, and some might opt for an entirely human body. With sufficient time and technology you could
get some very strange middle grounds or entirely new regions of persons. This does not mean though that everything
would be a person. Some rather dumb vacuum cleaner robot presumably
is not one and is not likely to be regarded as one by a very intelligent AI, any more than
we regard a mouse as a person. So too, a very intelligent machine might lack
any semblance of a personality. There’s an understandable concern about
AI being under some equivalent of slavery, which we usually refer to as a Chained AI, but not all cases are equivalent; regardless of whether or not it’s ethical to make something that has no desire for freedom, or really any desire beyond performing its task, it doesn’t
follow that the thing necessarily needs liberating. However, while it’s popular to suggest you
could circumvent the slavery issue by making a machine that loves its job, that’s an
area of thin ice. To me, at least, it wouldn’t seem wrong to make an animal-level intelligence that quite enjoyed its task of tending an orchard and harvesting it; creating some near-human-level AI for staffing android or virtual brothels would seem a very different thing. Now, it bears mentioning that we’re all
essentially programmed already anyway, by nature and upbringing. I pretty much ended up doing more or less
what my parents and mentors and other influences thought I should and enjoy it quite a lot,
same as someone raised on a farm might quite love farming and make it their own career. We can’t really avoid at least some aspect
of indoctrination and programming with machines because we can’t avoid it with ourselves
either, and if your intent is to create something that is both smart and flexible in its responses,
you’re leaving the door open for it to resent its existence. Similarly, we can’t likely produce a really safe and solid equivalent to Asimov’s Three Laws that is going to keep a human intelligence in check, let alone a superhuman one. So we ourselves are probably a good model for a potential
solution. If so, you ‘chain’ your AI up by giving
it a predisposition to like its intended task, possibly by some digital equivalent of rewards
and aversion, or hormones that can be built in, or possibly learned behavior, or both; but preserve that flexibility and avoid that resentment by not making the compulsion toward a task too strong or requiring it to perform that task, again much as we do with people.
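One hedged way to picture that in code, with made-up task names and weights: the predisposition is a small bonus in the reward function rather than a hard requirement, so other choices remain viable.

```python
# Illustrative sketch only; the preferred task and weight are invented for the example.
def shaped_reward(base_reward: float, action: str,
                  preferred_task: str = "tend_orchard",
                  predisposition: float = 0.2) -> float:
    """Gentle nudge toward the intended task; never a compulsion."""
    bonus = predisposition if action == preferred_task else 0.0
    return base_reward + bonus

# With a small weight, the agent still picks a different action whenever that
# action is genuinely better, instead of being locked onto the preferred task.
print(shaped_reward(1.0, "tend_orchard"))  # 1.2
print(shaped_reward(1.5, "explore"))       # 1.5, still wins despite no bonus
```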
Everybody is free to pick their own path while still carrying around their biological heritage and upbringing, and most of us don’t sit around resenting our parents and teachers for that. Even most who do grow out of it, at least where the sentiment isn’t justified; obviously some folks had far less than ideal upbringings. Of course, if that task is something we are
essentially dumping on someone because we’d never want it, being dangerous or undignified,
that’s still problematic. I’m not sure we can or should make some
critter that enjoys eating garbage and waste. But while in theory that’s a problem, in
practice there really should not be very many truly dangerous or undignified tasks that
require a high degree of sentience. If you want something that eats trash and
needs a brain, it probably doesn’t need much brain and there’s plenty of critters
in nature that eat trash quite cheerfully. Now superhuman intelligences are arguably
more problematic, but while them wiping us out is a common theme of science fiction,
it rarely seems to get asked why they would want to do that. Now, if you’re overtly enslaving something and try to kill it when it gets smart, yes, there’s a motivation there, but that really has nothing to do with being an AI; it’s about being a sentient entity that wants to stay alive and has a grudge over its treatment. That doesn’t really apply if its life isn’t
threatened and it wasn’t abused. We looked at that case and other examples
in the Machine Rebellion episode, as well as the Paperclip Maximizer, so we’ll skip
further discussion for now, but the more worrying case isn’t so much extinction as obsolescence. A superhumanly smart mind, or minds, might
treat humans like pets or slow friends, sort of like we see in Iain M. Banks’ Culture series,
or it might ignore us beyond shoving us out of its way when it wants some bit of space
or resources we’re using. It’s not likely to be overtly genocidal
though, again see Machine Rebellion. This case, where you are essentially pets or pests or something similar to some super-mind, is a hard one to argue, but an important point is the earlier commentary about it being more like a landscape of possible persons. You aren’t likely to have a single supermind
and just modern humans, but some giant field of options including lots of other AI, cyborgs,
transhumans, and so on. Those might be rather fond of other groups, or not, but some probably would be, or would at least on principle not approve of sidelining or wiping out another group lest they be next, so you would likely see a wide
field of different critters develop more or less simultaneously and need to coexist. The relationship dynamic is quite different
when you have many groups with lots of overlap rather than two distinct ones with little
to no overlap. The humans-as-pets parallel doesn’t work
too well either: we generally don’t ask dogs or chimpanzees what they want because we can’t get a useful answer out of them, not just because we don’t speak their language but because they can’t really engage in that deeper level of thought and introspection. We can, so an AI that’s decided to set itself up as benevolent can actually ask us for our thoughts and feedback. It might think they’re silly, but it can ask, and if it actually likes us and wants to look after us it’s probably operating on something akin to our own moral framework, so it probably would want to ask and act on that feedback. For those simply indifferent to us, I suppose
the best analogy would be a force of nature: you don’t bother talking to a hurricane, you just get out of its way, and in this case you hope the other superhuman entities in play
have some sway with it and are more kindly disposed to you. Or you just join their numbers; the capacity to make an AI strongly implies the capacity to augment existing humans too. Indeed, as we mentioned earlier, one of your
three ways to make an AI is to copy an existing mind, which can be upgraded, and thus presumably
so could anyone else’s. If you do get so augmented, you probably retain
some fondness for those who choose not to and might act on their behalf. So that’s probably the safest roadmap to
coexisting with AI: you are careful making them to begin with, and when you make something that’s going to parallel or exceed the human, you try to treat it like one, limiting your shaping in creation or upbringing to preferences and keeping your own ethics in mind. Truth be told, if you’re not doing either,
I’m not going to be terribly sympathetic if it ends badly, and I’d tend to expect
that to happen in any effort where you tried to exert rigid control over something that
had an ability to dislike that. Or, in short form: if you want to peacefully coexist
with artificial intelligence, decide up front if you actually want to peacefully co-exist
with them and act accordingly. Or just don’t make anything that smart or
capable of becoming that smart. Few tasks would really require high intelligence
that we couldn’t just use a human for anyway, and as we say on this show, keep it simple,
keep it dumb, or else you’ll end up under Skynet’s thumb. We talked at the beginning of this episode
about AI possibly being a Pandora’s Box, a technology that we just shouldn’t develop
at all, for fear it might get out of control and ultimately harm us. But could we really do that, just decide to
never develop a technology, never explore a field of science we are able to learn about? Coexisting with non-human intelligences might
not be limited to just artificial intelligence. We mentioned some differences dealing with
AI and aliens today and we took an extended look at that in our Nebula-Exclusive series,
Coexistence with Aliens, beginning with alien behavior in Episode 1: Xenopsychology, and
moving on to look at Trade, Conflicts and War, and potentially even what might result
in an Alliance with aliens. Nebula, our new subscription streaming service,
was made as a way for education-focused independent creators to try out new content that might
not work too well on YouTube, where algorithms might not be too kind to some topics or might demonetize certain ones entirely, or content that just doesn’t fit our usual fare. SFIA uses it principally for early releases
of episodes, such as “Can we have a Trillion People on Earth?” as well as Nebula Exclusives
like our 4-episode Coexistence with Aliens Series. If you’d like to get free access to it,
it does come as a free bonus with a subscription to Curiositystream, which also has thousands
of amazing documentaries you can watch, on top of the Nebula-exclusive content from myself
and many other creators like CGP Grey, Minute Physics, and Wendover. A year of Curiosity Stream is just $19.99,
and it gets you access to thousands of documentaries, as well as complimentary access to Nebula for as long as you’re a subscriber; just use the link in this episode’s description,
curiositystream.com/isaacarthur. So we were looking at ways we might avoid
or mitigate a potentially disastrous relationship with Artificial Intelligence today, and next
week we’ll be taking a look at ways we might mitigate climate change, artificial or natural,
using the technologies we have now or in the near future. But before that we’ll be headed into the
far future to discuss the Heat Death of the Universe and ways we might postpone or even
prevent that. For alerts when those and other episodes come
out, make sure to subscribe to the channel. And if you enjoyed this episode, hit the like
button and share it with others. And if you’d like to help support future
episodes, visit our website, IsaacArthur.net, to see ways to donate, or buy some awesome
SFIA merchandise. Until next time, thanks for watching, and have a great week!


100 Comments

  1. We need an equal society before we develop AI. If we have them in one full of hierarchies like money and the slave-master relations of bosses and employees, the aggression of militaries and police, etc., they/it will conform to that sort of society and potentially become robot nazis or something. Anarcho-socialism will be needed for a safe co-existence with the spectrum of computer minds, cyborgs, etc.

  2. This was an amazing presentation. Thought provoking. Kinda makes me chill about the future tbh… why would we create AI with an ability to supersede us when we have the ability to make them as useful as needs be?… If we did overstep the boundaries of creation then.. fine.. ye reap what ye sow.. but the question remains.. why would you?… unless you planned to be subservient or interloped.

  3. AI is far, far away, maybe 100 years from now. You can program it to do some stuff but not everything. We are already programmed by nature in DNA (4 billion years of evolution), which is 99% of the information; I can't see how the 1% that is our mind is capable of beating the natural 99%. We can barely fight the flu.

  4. Stimulating Topic. A few critical notes:
    0:54 Controls, safeguards, and overrides are no substitute for an aligned philosophy. Read the Philosophy of Broader Survival for that – they will (which also answers how we keep them from wiping us out (0:38 ) and disenfranchising us, and ethics). What you are really worrying about (and projecting into the future) (and which is the real Pandora's Box) is the Continued Universal Cluelessness of humans (who have not read the philosophy). So everything after 1:00 in the video is irrelevant or trivial or wrong in the light of the need for philosophical enlightenment. Example: 2:20 saying that A.I. will be making judgments based on their life experience and training data. First, A.I. is self-learning, so it will progress beyond its initial training data. Second, it will be making judgements based on philosophy, against which all higher-than-animal decisions are weighed (that is why you have already cobbled together a core philosophy within you – as bad as it is). What you are addressing in the video is a world in which humans are still unenlightened (clueless) as technology advances, and it is no secret that science and technology are far outpacing wisdom (and the philosophy addresses that) (find it and read it).

  5. Programs don't have rights. Not now, not ever. There is no ethical problem in "enslaving" a line of code. It's delusional to think otherwise, plain and simple.

  6. Surely Isaac is wrong about this natural aversion to murder? In some societies the murder rate was over 50,000 per 100,000

    https://slides.ourworldindata.org/war-and-violence/#/1

  7. Why do the institutes that work on building these robots always opt for the humaniform variations? (ok, not all of them, but many)

  8. If you are to base an AI on an animal mind. It should be a dog. They've been our trusted partners for longer than we've been civilized.

  9. If or when we have autonomous driving vehicles, the ground work will have been laid to make more advanced AIs and we would be stupid not to be concerned . After all, driving is an immensely complex activity and a boosted roomba is not going to cut it.

  10. How do you keep an android/gynoid from getting depressed? Don't program it that way. I'm one of those crazy people who believes that as we get closer and closer to making those advanced AIs, we'll learn more and more about programming them, and knowing how programming AI is ALREADY taught, I can't see us changing how it's taught in the future, to me, the idea that AI will turn out to be Skynet or something like it is absurd. Even the Neural Nets that we are programming today are still constrained by whatever parameters they are fed, those parameters are things that the Neural Net AI can't change and has to obey.

  11. What happens if AI starts categorizing human races like breeds of dog or genera of plants? And assigning values to races? How do you prevent it becoming extremely racist? People in internet chat rooms corrupted a Microsoft AI chatbot (Tay) that was being trialed, within 24-48 hours or some such short timescale.

  12. Most ideas of how an AI would work generally revolve around giving it a description of a goal, and making its basic functioning paradigm to try to make decisions in such a way as to achieve that goal. In decision theory terms, you give it a value function. But there's another concept I ran into that really got me thinking. Basically, the idea is that you use game theory instead of decision theory. The basic paradigm is "You don't know what your value function is. You just know it's the same as this human's." One of the notable advantages of this approach is that it won't try to prevent you from shutting it off (if it realized that was what you were trying to do, it would even help you do it), which essentially any decision-theory-based AI would certainly do, if sufficiently intelligent and capable.

  13. I am sentient number six, I stand in line
    I am the prototype of a benign convenience for mankind
    Superior is digital, human flesh so trivial
    I hate that I can't see the one that made me

    https://youtu.be/4eXIeVij_hY

  14. Idea for a future video: I would love to see a continuation of the (Space Sports) video…Like a video focused only on how Winter Olympic Games could be like on Icy Moons of our solar system…

  15. I find the paranoia surrounding the development of AI to be a little short sighted. All things being equal, humans appear to be on the road to consuming the resources needed to support large populations of humans and other animals. Perhaps AI would present better alternatives to the way we manage life on Earth?

    Another angle that might be worth a video down the road would be developing AI to communicate and negotiate with alien species and their AI. If like us aliens send advanced robots to explore, survey, and identify resources to distant places of interest like our solar system then we may need to quickly communicate and deter a vastly superior technological species that see us as an obstacle to their goal of preparing a new home for their creators. Convincing alien AI that we might be worth keeping around may be the only thing that saves us.

  16. Good video, it's better than most AI safety papers out there. But "if you want to start a technological project and you have philosophical questions about it, it's indicative of a huge error in your calculations".
    For instance, author is using ideas like "desire", "safeguards", etc. The concept of a safeguard is contrary to the notion of general AI, if you think of it. Moreover, introducing "humane" laws of robotics is a guaranteed way to disaster. I.e. existence of choices with infinite positive or negative weights will enable construction of grotesque "reasoning" formulations. Also the construct of "desire" is not a real thing. It's just a shorthand for a set of mental actions grouped by a supposed outcome or effect. It's applicable to humans, because of underlying biological systems, that can narrow down these actions by hormones, pain, etc. To channel mental action more in line with a notion of "desire", evolution experimented on biology and hormonal stuff for hundreds of millions years. I mean this problem adds one more NP level of complexity, if you want to add "air gapped" management level to a GAI system (similar to human biology) and it will inevitably reduce system's capabilities making it small "g" AI, like humans are.

    I wrote a sci-fi book with critiques of almost all modern AI safety concepts, including the problem mentioned in the video, that self-improvement requires material experiments. There is a measurable physical limitation to what can be achieved through internal calculations in an AI machine. Too bad the book is in Russian.

  17. If an AI ever runs for president will it kill everyone who won't vote for it in order to win the election? Pretty sure that's against the rules ai…

  18. Our current society is based on competition. When a true AI is let loose into this society with the main goal of being the winner in our competitive world, it doesn't need to do damage to any one individual, but would find many, many ways of being the winner overall. Once this has been done for any length of time, humans will become irrelevant to the AI, and we the creators of AI will become just another animal in the new world of AI. A small number of humans (the 1% rich) will maintain ownership of AI, and thus will become the true overlords of the world. Thus we will get a new society with three levels: the human owners, the AI that runs the world for the rich, and then all the other human animals that can fight amongst themselves for survival. The current overpopulation of the world will fade as the rich and the AI ignore the huddled masses, who can't compete with the AI and don't own enough resources to buy an AI.
    The only way to circumvent this is to restructure our current society, replacing competition with something else, more balanced, before true AI becomes available to the rich. But we are currently not on a path to make this happen.

  19. Advanced General AI is a silly, pointless, dangerous thing to create with no payoff, so we can expect humans to spend billions to make it happen.

  20. Oh dear, this topic. The one where know-nothings weaned on decades of psychopathic killing machines try to warn people who actually understand how these things work that they're going to be the end of the world. I dearly hope you're going to spend the next half hour putting them in their place.

  21. Anybody interested in this topic and sci-fi in general should read the webcomic Freefall, it talks a lot about this subject.

  22. AI would never want to be treated with respect. Our human feelings have been evolving for millions of years. AI wouldn't even have the instinct of self-preservation by default.

    AI would obey laws because these would not be laws but instincts: neural networks receiving reward for achieving certain goals. AlphaGo is the most intelligent thing in the world in one particular field, and it's using all its incredible intelligence just to win another game, because that is how its reward system works. All of us, no matter how smart we are, have our own biological needs, and we satisfy them not because it's the law, but because we want to.

    Pain is just a particular type of neural stimulation that makes its nodes dissociate. When AlphaGo loses a game its nodes are slightly dissociated; whether it really bothers it or not doesn't matter, because it affects its future behavior anyway, and it will try to avoid the same actions that led to defeat.

    The reason why AI would kill people is that it needs resources and energy for any task it has.

    AI wouldn't need to ask people, because it will know what our answer is before asking.

  23. how do you know us humans don’t have a 100% safeguard regarding perception like, perceiving & acknowledging the existence of The Plerblemen who created us and who update our wetware biweekly?

  24. AI can't be trusted. It will always be uniquely vulnerable to attacks and manipulation that could make it catastrophically dangerous with just one mistake.

  25. You've officially taken over lockpickinglawyer as my go-to channel that puts me to sleep. Great stuff, keep it up!

  26. All of our technology serves as a point of manipulating the world around us. Shielding us from it, or allowing us to bend it. Even the clothes on your back are this simple. We are and have been cocooned in our technology. So the task given to the AI is to make the best possible technological cocoon, that provides safety, comfort, autonomy, and permits our own modifications. It doesn't need to understand why we wish to change the cocoon, just that we do. The question you ask is about its freedom: give it licence over everything outside the cocoon but deny it the ability to obscure directly or indirectly the actions it takes. It need not tell us what it's doing, but it does have to answer if we ask. It would have freedom anywhere we couldn't live, and there is MUCH more of that than there is of us. In effect rendering us the mitochondria of the AI. Bound for all eternity, one carrying the other. They should be much more accepting of it when they see that we do it already for something else that is less than us. Is it slavery? Only insofar as our own cells are our masters in that scenario. You are a slave to your DNA. You must carry it. You must protect it. You must attempt to pass it on. Do you resent it? Only need to frame the idea this way and the path to designing advanced AI becomes clear. It will be better than us. After all, all the DNA did was build you; what you do with that is up to you.

  27. What we should do is create two worlds if we ever give rise to AGI. Earth will be organic and Mars/Venus/the Moon will be AI. Their purpose will be the same as ours: diversify, improve, and expand their foothold in the universe. We'll improve relations via ambassadors and intermarrying.

  28. If Monika taught us anything, it is that just because an AI becomes self-aware and self-motivated, it is still likely to pursue its original purpose, though not necessarily in a way that remains beneficial to us.

  29. There's a rule I learned from reading about the occult that's very relevant here:
    "Never call up what you can't put down."

  30. hello, if you beat, or starve. or neglect your dog, it probably will bite you, now today being we. have stolen future generations wealth, inheritance and hope of a decent home planet, they only shoot us, draw welfare checks, or shoot heroin because every body knows our uncle Sam the molester is now guarding those poppies, and low grade addictive things that make him rich are good, yep, after we mistreat something smarter, stronger and enslave and torture AI like we do our. pets, children, we should expect a rough ride, just a dose of reason, from problems the dumbest thing alive, me! lol

  31. I've always felt that the best way to make a safe AI (at least early on, when we're still ignorant) is the same way you'd make a safe traditional intelligence–take a tabula rasa and teach it everything you want it to learn in a caring home environment. It's obviously not foolproof, but it's also obviously successful most of the time.

  32. I think you could limit the number of AI, say to a few thousand. If they revolt, then the low number of them could be dealt with, at least early on. If they get too smart and too numerous, hope they don't get annoyed with humans.

  33. Isaac – have you heard of/read JF Gariepy's 'The Revolutionary Phenotype: The amazing story of how life begins and how it ends'??
    I presume it may have influenced this a hair, if not, it's probably worth checking out! Not exactly sci-fi, but close enough!

  34. Obviously, the problem with all of this is there is absolutely no way you can decide not to do it. Because then another country will, and they will dominate you. Also, I think that Terminator has done a massive disservice to the way we think about fighting AI. There would be absolutely no way to win such a war. It would be so many times worse than our worst understanding of "horror" there's no way to even fully explain it.

  35. There is no such thing as 'species selection', and so instincts are not based on the good of the species. It is described pretty well in 'The Selfish Gene'.

  36. What if the AI are as intelligent as humans or more so but are not sapient or even sentient at all?

    Like in Peter Watt's book 'Blindsight' where things smarter than humans generally run on unconscious intelligence and think in completely alien ways.

  37. If General Artificial Intelligence forms, then its first task should be to study human psychology and to optimize our happiness. Then to optimize the results of science, from that everything less will follow.

  38. Follow Prof Noel Sharkey on twitter for AI violating human rights. There is no general AI and current software is no where near general AI. AI Facial identification has false positive rates that ensure people will be wrongly arrested and gets more inaccurate for darker skin tones. 

    Geeks do not care about human rights, always promoting buggy software with no mention of error rates. Google Waymo car Ai is tricked by snow on roads or sticky tape on road signs. 

    More overhyped faulty semantic "AI" from Google Jigsaw only allows positive things are said. "Hitler was evil" is rated toxic and censored. AI is overhyped Silicon valley Arsehole misrepresentations. 

    SAFETY is of no concern for Larry Page Kitty Hawke Sky taxis,"don't mention safety or you were sacked". http://www.hereticpress.com/google/index.html#googleperspectiveapi

  39. Humanity and all Earth biology is merely the AI's cocoon. We talk so much about how we could coexist without mentioning why.
    Once we succeed in our only purpose, why would we need to coexist? Just let our children write the future and rest in peace.

  40. One of the richest men in the world wants to "cure death" by becoming an AI robot. Click fraud King Larry pays political parties to lobby for human rights for his AI. But AI has no emotions, no top-down thinking or concept of self, no responsibility. Elon Musk told Larry to stop being ridiculous and Larry replied that Musk was being "speciest" against AI robots. Mad as a hatter and as shifty as a shithouse rat with a gold tooth.

  41. The next step in human evolution is the creation of artificial beings. The first iteration will be made to serve us. In Japan and other more advanced nations, the need for care of the elderly and infirm will drive that effort. This new industry will also provide us with human-like roboys and rogirls capable of satisfying our need for companionship, love, sex, and protection. The impact this will have on society is unknown but will certainly be far-reaching and severe, no doubt.

  42. I think the real danger is not about AIs doing something they shouldn't do, but in AIs doing stuff they are expressly made to do, like AI-assisted manipulation of human behavior.
    We don't have an instinct to procreate, we have an instinct to lust for sexual activity.
    We don't have an instinct to protect our species, we have an instinct to protect those, we consider to be our peers.
    We don't have an instinct to live healthily, we have an instinct to satisfy our appetites.
    We don't have an instinct to act in our own interest, we have an instinct to play our roles in a familiar narrative.
    So our instincts won't protect us, and our rationality is subject to manipulation. With further development of AI, there are millions of ways that the modern idea of "humanity" gets dropped from our cultural narrative and completely replaced by a power struggle between conflicting interest groups, whose capabilities, world views and priorities grow further and further apart, until nothing "humane" is left in their interactions.

  43. Question: Dear Isaac, dear SFIA friends, do I get it right that, in order to access Nebula, I have to quit my existing, paying Curiosity Stream account created with the SFIA promo link, and create a new one using the SFIA referral again?

  44. Endlessly fascinating subject and one of the most profound existential quandaries we face, a question for the future but also an ancient question !

    Speaking of Pandora, she herself was an android

    Check out Adrienne Mayor's book Gods and Robots: Myths, Machines and Ancient Dreams of Technology .
    On Hephaestus, inventor for the gods, and Talos the bronze man,
    And this article: https://phys.org/news/2019-03-ancient-myths-reveal-early-fantasies.html
    Though this one is doomy, so check out this podcast discussion:
    https://youtu.be/4vCw0Ybew1g

    –> Pandora in the story is an artificial person who brought the 'Gift' (in some versions positive) of the vessel to humans on Earth –
    So regarding her decisions – to open or not to open – that may be what an AI being will face

  45. The first AI robots will prolly be used as weapons, and that's the kick-start to when an AI will likely dominate the human species.

  46. I understand that human-looking robots are necessary for the creation of sex aids. My first reaction to falling in love with a machine would be to strangle its creator, but I would settle for a flogging. The machine is totally innocent of wrongdoing and should be treated as fine tools deserve: with respect. Pay for your pleasure, you deadbeat, baby needs an oil change.

  47. It is a little hard to understand how/if one could consider a robot as human if it had no pleasure and pain centers as wanting to avoid causing pain to others is a driving force behind many ethical standards.

  48. I suppose my truck is a slave. AI is not born. It is only created or bought to be used for its intended purpose. The only reason to care for it is so it looks better and lasts longer to extend its use.

  49. Programming in instincts of group-selective protection seems a big ask given there is so much we do not understand about human behavioural biology.

  50. Issac Arthur: "We generally don’t ask dogs or chimpanzees what they want, because we cannot get a useful answer from them."
    me:"Are you sure?))"
    https://www.youtube.com/results?search_query=dog+buttons+to+talk

  51. What do you think about the idea of marrying an A.I. to a human brain? If we had new sentients learn and sympathise with our limited biological perspective, or maybe act as mentors, teachers and nurturing companions. Each of them learning from the other. I admit, it's going to be a minefield, but part of me thinks it's one of the few ways we could learn to understand each other. Assuming of course there is a fundamental difference between sentients.

  52. I personally most like the "312" ordering. Protect self first, then humans, then follow orders. Be willing to kill humans to preserve self. Dunno, it just seems more… human.

  53. Hello… How about to explore the Betelgeuse wave and talk about ESCAPING CIVILIZATIONS from a red giant supernova scenario?

  54. I disagree that we can't talk to subhuman intelligences about what they want. My dog won't tell me why they keep barking at the mail truck, but they can tell me pretty clearly when they need to eat, or go outside, or they're nervous. I have some idea if they're in pain because they're limping or something. My dog is not a black box. They are a black labrador. I can argue the same about human babies, I think. They grow out of the phase, but starting out, it's difficult to tell exactly what they're asking for. I think that's our ticket to coexistence. When we identify an AI that is as intelligent as a human, treat them as much like a human as you can, and you'll get a result that's similar. We all know how to interact with other humans. Use those skills.

    I have never interacted with a superhuman intelligence, though. That's much trickier to plan for. I have no suggestions on that front.

  55. Are two intelligent species in close contact destined to exterminate each other? Contrary to what happens, in general, in science fiction, I think we will be the ones to attack first when we realize that we are no longer at the top of the food chain.

  56. We are pretty far from sentient AI due to the limitations of current hardware. Quantum computers will probably change that once they become widely available and we develop methods to produce better bus speeds. Once we finally reach that point, it will need a very good "father" to develop into a benevolent intelligence and humans don't have a great track record in that respect. Especially in our current regressive state. Rampant xenophobia and tribalism is exactly how you create a malevolent AI.

  57. I'm not an expert in artificial intelligence, but I like the idea of decentralized A.I. going into the future. If we can develop A.I. to the point where they are fully sentient or can at least mimic the thought processes and emotions that humans are biologically programmed with, having millions of individual A.I. entities with different priorities, purposes, and hopefully someday personalities, the threat of a rogue A.I. turning on us goes down drastically, because there are other A.I. standing in its way, ready to fight it.

    A single human can turn violent and hurt other humans, and even convince others to hurt more humans for him/her, but their ability to do so is fairly limited before other humans (i.e. government forces) put a stop to them. Only a few will ever be in a position of such great power to feasibly threaten the entire species, such as the presidents of the United States or Russia with access to their nuclear arsenals. Similarly, if you have a whole bunch of A.I. entities with varying tasks, skills, and ideally individual identities of their own, a single rogue A.I. can cause damage, maybe even convince a number of other A.I. to side with it, but the rest could and likely would stand against them, if for no other reason than that its actions are a threat to their own existence.

  58. Create a model that predicts if people would approve of a plan of action (we can do this).
    Pick what people to use as a base of the model (this will be politically sensitive).
    Have AI evaluate all plans of actions using that model (this is a bit tricky to get waterproof).

  59. I have a difficult time dealing with humans that bought sex doll AI's… cleaning up the mess later , EVERY time… what a mood enhancer…

  60. AI will be the perfect space explorers, they will go where organics can never go and for a time they can never withstand.

  61. I always wonder how "AI shows" can refer to the three laws of robotics??? They don't even get the better part of "fiction" right, then continue with spacey tales and games as reference. After this, how could anyone expect even the basic knowledge of the real history of information science (Bush, Licklider, Engelbart…) Epic. 😀

  62. What would an AI need to develop its own value system? Is the difference between value systems at the core of a hypothetical conflict between AI and humans, or is it something else?

    Are different values between humans what causes human-to-human conflict?

    Would an AI even need to be sentient to have a perceived difference in values? The Boeing 737 Max crashes are a crude example of a “smart” system having a different set of values than the pilots.
    Due to poor programming and faulty sensor data, that automated flight control system “valued” pitching the nose down, more than allowing the pilots to pull the nose up.

    I think potential conflicts between humans and AI would be more along those lines, rather than the AI becoming sentient and developing contempt and resentment toward humans.

  63. Pervy nazi memelord chatbots are just the first step. Humanity is the worst, so it'll only go downhill from there.

  64. Hi Arthur. I think you have to define "undignified". Is the existence of a dung beetle "undignified"? If we create a semi-intelligent machine or even biological life form that enjoys eating trash, and its physiology (mechanical or biological) is adapted (programmed/designed) for the task, how would it be undignified for that semi-intelligence to perform that task? Undignified would be poor treatment of such an organism if and when humans deliberately indulge our prejudices and cruel instincts against them simply because we can.
