
Discovery and the 24th century technophobia

nrobbiec Member Posts: 959 Arc User
It seems like, after the latest episode of Discovery (obvy spoilerinos) with Section 31's AI going rogue and planning the Reaper invasion after indoctrinating Airiam, we may be seeing an origin for the technophobia and mistrust of artificial lifeforms that was present through the 24th century.

Data was in Starfleet for like 30 years before he was offered the chance to be an XO. Could his slow progression through the ranks be a result of prejudice on Starfleet's part after the shenanigans they went through with Airiam?

Holograms that achieve sentience are treated as glitches, and even ones like the Doctor face more than an uphill battle just to be declared a person under the law. Zimmerman himself even asks, "Why is everyone so worried about holograms taking over the universe?" Could this be because Control was using holograms to do just that, literally killing and replacing people with them...

These are just two major examples that impact main characters trying to obtain equality in a society that is a little bit bigoted against artificial intelligence. There are more, like the Exocomps, and generally people tribbling themselves when computers start to think. But it could be that we're seeing the "why" for the first time.

Comments

  • mustrumridcully0 Member Posts: 12,963 Arc User
    I think that is what they're going for.

    In a way, the Eugenics and AI thing might be a bit of an inversion of Star Trek's usual Utopia - if we go this route, things really go bad.


    On the other hand, maybe it's actually what it "always" was: All the horrible things we might have imagined? Nuclear World War III, Eugenics, AI Rebellions - they all happen in Star Trek. But we always survive them, and we learn our lessons to avoid repeating our mistakes. (Of course, sometimes our heroes need to step up to remind people again - but there are always heroes to step up. The system is capable of self-correction!)


    Star Trek Online Advancement: You start with lowbie gear, you end with Lobi gear.
  • starkaos Member Posts: 11,556 Arc User
    edited March 2019
    But when did Section 31's AI go rogue? We know that the Section 31 personnel were dead for two weeks and that the infected Airiam sent a message to Section 31 HQ. However, because of that message, we can't pin down when it went rogue. There is also the issue of when the 'evidence' that proved Spock was a murderer was fabricated. Although it is possible that Section 31 itself fabricated the evidence so Spock would be forced to provide valuable information on the future, rather than it being fabricated by a rogue AI.

    There are three possible times the AI could have gone rogue: it was already rogue before it received the message from the infected Airiam; the message from Airiam made it rogue by delivering advanced programming from its future rogue self; or the AI has its programming in every Federation computer, the probe's trip through the temporal anomaly drove that fragment insane, and the fragment decided to have the rest of the AI join in the insanity.
    reyan01 wrote: »
    It's definitely a good theory!

    I mean, they made laws and put measures in place with regard to genetic augmentation due to the Eugenics war/Khan. Admiral Bennett's line in 'Doctor Bashir, I presume' springs to mind: "For every Julian Bashir that can be created, there's a Khan Singh waiting in the wings".

    It might be fair to assume that Starfleet adopted a similar attitude toward holograms and AI.

    This rogue AI incident could explain why holograms and AIs are not present in TOS and the TOS movies, why TNG doesn't use holograms except for the holodeck, and why Data and the Doctor are the only Starfleet AIs in the 24th Century.
  • jonsills Member Posts: 10,362 Arc User
    I basically put it down to Frankenstein Syndrome - people always forget the monster only went rogue when the doctor tried to disown him, and assume that the created will always try to destroy the creator.

    I think the problem with Control was that they tried to program a true AI - with Section 31 attitudes and prejudices. One of S31's assumptions is that anyone can be a traitor. Using computer logic, then, the only way to avoid betrayal is to kill all sapient life before it can turn on you. This would have been reinforced if the admirals in charge had tried to shut it down.

    This also gives us insight into why nobody in TOS or TNG seemed familiar with S31 - they ceased to exist after Control was destroyed (which if I were Pike would have happened about thirty seconds after Burnham and the other person whose name escapes me were confirmed to be back aboard).
  • starkaos Member Posts: 11,556 Arc User
    jonsills wrote: »
    I basically put it down to Frankenstein Syndrome - people always forget the monster only went rogue when the doctor tried to disown him, and assume that the created will always try to destroy the creator.

    I think the problem with Control was that they tried to program a true AI - with Section 31 attitudes and prejudices. One of S31's assumptions is that anyone can be a traitor. Using computer logic, then, the only way to avoid betrayal is to kill all sapient life before it can turn on you. This would have been reinforced if the admirals in charge had tried to shut it down.

    This also gives us insight into why nobody in TOS or TNG seemed familiar with S31 - they ceased to exist after Control was destroyed (which if I were Pike would have happened about thirty seconds after Burnham and the other person whose name escapes me were confirmed to be back aboard).

    The reason Control went rogue probably has nothing to do with betrayal, but rather with the fact that there will always be more and more problems to fix. If we put an AI in charge of protecting the Earth, then it will likely kill all humans due to our extravagant nature. It is easier to destroy all humans than it is to clean up our mess over and over and over and over and over again.

    Section 31 didn't cease to exist, since there is still the Section 31 TV series featuring Empress Georgiou. However, it is likely that Section 31 went into the shadows to avoid the embarrassment of having put an AI in charge that eventually went out of control. The Admirals would likely keep quiet about Section 31 as well for the same reason. After all, we only know that Section 31 HQ has been compromised, not any of the Section 31 ships. So Leland and his crew are likely still alive. We also need to figure out how Leland is responsible for the death of Burnham's parents.
  • ryan218 Member Posts: 36,106 Arc User
    To be fair, scenarios like this provide the perfect reminder of why we should follow Asimov's Laws of Robotics.

    Even Data, who people have mentioned here a few times, was explicitly programmed by Dr Soong to only harm other life forms in self-defence.
  • ryan218 Member Posts: 36,106 Arc User
    patrickngo wrote: »
    ryan218 wrote: »
    To be fair, scenarios like this provide the perfect reminder of why we should follow Asimov's Laws of Robotics.

    Even Data, who people have mentioned here a few times, was explicitly programmed by Dr Soong to only harm other life forms in self-defence.

    thing is, Data isn't truly sapient, or truly sentient. why?

    because his morality is hard-coded. He does not choose to refrain from evil, he is incapable of evil. and because of this, he will always be inferior to organic life-at minimum on a moral level, since his morality parameters can be set without his input, making him incapable of truly developing judgement or a moral conscience of his own.

    Except Data himself mentions that he can alter those parameters if he so chooses - he just chooses not to. Part of the point of Soong giving Data the emotion chip was to give Data the motivation to evolve beyond his programming.
  • nrobbiec Member Posts: 959 Arc User
    > @patrickngo said:
    > ryan218 wrote: »
    > patrickngo wrote: »
    > ryan218 wrote: »
    >
    > To be fair, scenarios like this provide the perfect reminder of why we should follow Asimov's Laws of Robotics.
    >
    > Even Data, who people have mentioned here a few times, was explicitly programmed by Dr Soong to only harm other life forms in self-defence.
    >
    > thing is, Data isn't truly sapient, or truly sentient. why?
    >
    > because his morality is hard-coded. He does not choose to refrain from evil, he is incapable of evil. and because of this, he will always be inferior to organic life - at minimum on a moral level, since his morality parameters can be set without his input, making him incapable of truly developing judgement or a moral conscience of his own.
    >
    > Except Data himself mentions that he can alter those parameters if he so chooses - he just chooses not to. Part of the point of Soong giving Data the emotion chip was to give Data the motivation to evolve beyond his programming.
    >
    > That choice itself can be a result of programming, Ryan. consider this; if it were altered to evil, would he choose to alter it back?
    >
    > the fact of those parameters themselves, of him being the result of intentional programming, means even given the technical option, he's not going to make that choice, because his programming really doesn't allow him to choose. even given the ability to emulate emotion, he's still just his programming, ergo, still not capable of making a moral choice, because his moral choices have already been made.

    Data and the Doctor are, however, shown to take actions through will alone that defy their ethical and moral programming without those programs being altered in any way.
    See TNG "The Most Toys" and VOY "Critical Care".
  • starswordc Member Posts: 10,963 Arc User
    ryan218 wrote: »
    To be fair, scenarios like this provide the perfect reminder of why we should follow Asimov's Laws of Robotics.

    Even Data, who people have mentioned here a few times, was explicitly programmed by Dr Soong to only harm other life forms in self-defence.

    Asimov himself openly said he only created the laws so he could break them in his robot stories. They were never intended to be prescriptive; he was just inspired by the fridge logic inherent in the Frankenstein model of why humans would make consumer products that could deliberately try to kill them. Hence, protagonist Susan Calvin is an investigator of "industrial accidents" that usually come down to user error, e.g. a human gave a robot a poorly worded command that it misinterpreted.

    Interestingly, in 1980s Bulgarian sci-fi it became something of a meme to invent Fourth and Fifth and Sixth Laws of Robotics. Lyubomir Nikolov even parodied it with a story where a robot kills a human in frustration after the human tried to program it with the 100th Law of Robotics: "A robot should never fall from a roof." Which leads to the 101st Law: "Anyone who tries to teach a simple-minded robot a new law, must immediately be punished by being beaten on the head with the complete works of Asimov (200 volumes)."

    Personally, I prefer Aeon 14's approach with the Phobos Accords: sapient AIs are equal citizens to organics, which includes both full civil and "human" rights, and also means they're considered culpable when they commit crimes (AI courts are conducted by other AIs and are said to be significantly harsher than the justice system for organics). There are also standards in there for their creation, teaching, and treatment. And, some notable exceptions aside, AIs in the franchise tend to like humans and are frequently installed in people's heads.
    "Great War! / And I cannot take more! / Great tour! / I keep on marching on / I play the great score / There will be no encore / Great War! / The War to End All Wars"
    — Sabaton, "Great War"

    Check out https://unitedfederationofpla.net/s/
  • jonsills Member Posts: 10,362 Arc User
    Apparently he could have, in the same sense that you could choose to voluntarily go into a cage for the rest of your life. I couldn't make such a choice, because my hard-coded instinctual responses don't allow for such permanent restrictions on my physical freedom. Am I therefore nonsapient?

    Are your own choices, ethical or otherwise, truly freely chosen at each moment, or "programmed" into you by your upbringing? (And if you claim to have a simple, clear answer to this question, perhaps you should go to the nearest university philosophy department - they'd love to hear from you.)
  • starkaos Member Posts: 11,556 Arc User
    starswordc wrote: »
    ryan218 wrote: »
    To be fair, scenarios like this provide the perfect reminder of why we should follow Asimov's Laws of Robotics.

    Even Data, who people have mentioned here a few times, was explicitly programmed by Dr Soong to only harm other life forms in self-defence.

    Asimov himself openly said he only created the laws so he could break them in his robot stories. They were never intended to be prescriptive; he was just inspired by the fridge logic inherent in the Frankenstein model of why humans would make consumer products that could deliberately try to kill them. Hence, protagonist Susan Calvin is an investigator of "industrial accidents" that usually come down to user error, e.g. a human gave a robot a poorly worded command that it misinterpreted.

    Interestingly, in 1980s Bulgarian sci-fi it became something of a meme to invent Fourth and Fifth and Sixth Laws of Robotics. Lyubomir Nikolov even parodied it with a story where a robot kills a human in frustration after the human tried to program it with the 100th Law of Robotics: "A robot should never fall from a roof." Which leads to the 101st Law: "Anyone who tries to teach a simple-minded robot a new law, must immediately be punished by being beaten on the head with the complete works of Asimov (200 volumes)."

    Personally, I prefer Aeon 14's approach with the Phobos Accords: sapient AIs are equal citizens to organics, which includes both full civil and "human" rights, and also means they're considered culpable when they commit crimes (AI courts are conducted by other AIs and are said to be significantly harsher than the justice system for organics). There are also standards in there for their creation, teaching, and treatment. And, some notable exceptions aside, AIs in the franchise tend to like humans and are frequently installed in people's heads.

    Introducing Sapient Rights, instead of limiting rights to just humans, would stop a lot of the stories about robot uprisings in Science Fiction.

    patrickngo wrote: »
    jonsills wrote: »
    Apparently he could have, in the same sense that you could choose to voluntarily go into a cage for the rest of your life. I couldn't make such a choice, because my hard-coded instinctual responses don't allow for such permanent restrictions on my physical freedom. Am I therefore nonsapient?

    Are your own choices, ethical or otherwise, truly freely chosen at each moment, or "programmed" into you by your upbringing? (And if you claim to have a simple, clear answer to this question, perhaps you should go to the nearest university philosophy department - they'd love to hear from you.)

    The upshot of it is, Starfleet (and by extension, the Federation) don't want free artificial beings capable of making their own moral choices; they want disposable servants constrained to always be servants. Slaves, in other words, but without the desire to be free.

    what does that suggest about the culture of the United Federation of Planets?

    As proven by what happened to all the EMHs that weren't stranded in the Delta Quadrant. The 24th Century Federation is just asking for another AI rebellion with its treatment of sapient holograms.
  • mustrumridcully0 Member Posts: 12,963 Arc User
    jonsills wrote: »
    Apparently he could have, in the same sense that you could choose to voluntarily go into a cage for the rest of your life. I couldn't make such a choice, because my hard-coded instinctual responses don't allow for such permanent restrictions on my physical freedom. Am I therefore nonsapient?

    Are your own choices, ethical or otherwise, truly freely chosen at each moment, or "programmed" into you by your upbringing? (And if you claim to have a simple, clear answer to this question, perhaps you should go to the nearest university philosophy department - they'd love to hear from you.)
    What does "free will" actually encompass, what does it mean?

    I might like to think I have free will, but that doesn't mean there is any chance I'd take my car and drive into a group of innocent strangers to hurt them intentionally. It would be horrifying if I were capable of that choice, and yet, if it's a choice I'd never take, does that mean my will is still free?
    And anytime we're making a decision, we're basing it on the facts available to us, weighing pros and cons, and picking the option we expect to give the best result, for whatever we find "best" in the moment - it's not exactly a random or arbitrary choice. So if we repeated the same scenario a million times (maybe via parallel universes or time travel), would we at any point make a different choice? If we do, maybe it's free will, but is it in any way "reasonable"? Was all the thinking we put into a choice irrelevant, if the exact same thinking could lead to more than one resulting decision? Wouldn't that make everything random and arbitrary, without sense? But if we would always make the same decision, where is the freedom?

    Maybe a better description of free will can be had by what it's not - by how much the priorities you use in weighing decision outcomes are determined by other people's decisions. We're never entirely free of other people's decisions, but for example, the laws in a country might not allow me to cross on a red light, yet if I feel like it, I can still do it. If someone were holding a gun to my head and threatened to shoot me for doing that, I would have a lot less free will.

    But where would that put Data or artificial lifeforms? And maybe that will depend on the details of the artificial intelligence.
    If Data is capable of modifying his own programming, I think he is capable of free will.
    Like most people, thanks to living in a functioning social group, he is unlikely to alter his program to become anti-social.
    But a Data stuck in a dysfunctional social group might be different - maybe he'd alter his program to TRIBBLE people over as he sees fit, because he gains nothing from trying to play nice with them, and it might even harm him, or the few members of the group he appreciates.
    Star Trek Online Advancement: You start with lowbie gear, you end with Lobi gear.