I've been giving some thought to fun things for the game, and holograms just crossed my mind. Imagine if we could have holograms storming the galaxy: the EMH Mk I's rising up from mining dilithium, the Hirogen's holographic army, the mad cleaner, and obviously Moriarty. I could personally do without Vic Fontaine, though they're welcome to make it Frank Sinatra, as Vic was originally supposed to be (only Frank wanted to be made up as a monster to make the cameo). Still, if it made for some fun content...
It's been 'done' quite a few times, true. But I think it could work as a large-scale rebellion that looks at the free-will/ethical issues of AI/holographic tech, similar to 'The Measure of a Man' (TNG), exocomp sentience in 'The Quality of Life' (TNG), and holographic sentience in 'Ship in a Bottle' (TNG) and 'Photons Be Free' (VOY).
Rather than the 'Terminator' or 'Matrix' scenarios, there could be a more thoughtful, Trek-based spin on it that looks at the issues raised in the aforementioned episodes on a large scale rather than an individual basis.
The Federation would not knowingly enslave sapient beings, so the only plausible scenario in which there would be a large-scale conflict would be exactly a Terminator-style "evil machines try to kill everyone because they're evil."
I think you mean sentient; sapient means something quite different.
And the Federation has been known to hover around the border of using/treating sentient beings as 'tools', as can be seen from the aforementioned episode examples. (If you watch Data's 'trial', even then the rights of AI were not clearly defined, and the judge refused to make any widespread changes to Federation law regarding AI.)
And no, a Terminator-style 'evil machine' scenario is not required, if you had actually read what I posted in regard to the suggested scenario...
> The Federation would not knowingly enslave sapient beings, so the only plausible scenario in which there would be a large-scale conflict would be exactly a Terminator-style "evil machines try to kill everyone because they're evil."
> I think you mean sentient; sapient means something quite different.
Yes. Sentience is the ability to feel, whereas sapience is the ability to think. And I meant exactly what I said.
> And the Federation has been known to hover around the border of using/treating sentient beings as 'tools', as can be seen from the aforementioned episode examples.
If the Federation treated the EMHs as tools, they'd have been deleted when they were decommissioned, instead of reassigned to other jobs.
> And no, a Terminator-style 'evil machine' scenario is not required, if you had actually read what I posted in regard to the suggested scenario...
Yes, it is, and nothing you posted suggested otherwise.
Ok...
(Backs away slowly). Nice talking to you
rattler2 (Star Trek Online Community Moderator):
Robots or holograms... you're talking about an AI uprising.
Actually, what I was suggesting (based on the rather benign episodes I gave as examples) was a 'rebellion'.
For instance, exocomps/holograms/androids refusing to carry out their jobs if they were not given equal rights (or even 'better' rights).
For instance, in the case where Data was told he was the 'property' of Starfleet because he was made of inorganic material, he argued that, as a sentient being, he had the right to choose his own destiny and resign from Starfleet; this right was originally refused, until his 'trial'.
The OP suggested some kind of evil AI war (Terminator-style); I suggested a more thoughtful approach that would not need to involve 'evil' AI, but would be more akin to those aforementioned scenarios, which explore the definition of self-awareness/self-determination.
The OP refused to engage with me on that suggestion and is adamant that an 'evil AI war' is the only possibility. But it's their thread, and they can do as they wish.
Well, Data's resigning was deemed illegal (not allowed), despite the fact that he signed up to Starfleet of his own free will. He was not allowed to resign from his position because Starfleet decided he was 'property', and he then rebelled.
If you wish to argue semantics, then that's fine; you're right. I was hoping for a more in-depth discussion regarding AI, etc. refusing to work as 'property'.
I'll leave it there.
We already have the Borg as Trek's version of Saberhagen's Berserkers, only assimilating all life instead of extinguishing it. There isn't much difference to the victims between dying and becoming drones, unless they're liberated at some point.
Dealing with an AI strike might make for an okay non-violent one-off side-story mission. On TV it would probably be one of the comedy episodes, as everyone must deal with the lack of their morning coffee once the replicators refuse to materialize it.
Even in 2409, holo-emitters are rare according to the STO bonnie kin episode, so a holo-rebellion might not involve enough holograms to become a proper war.
Y'all might want to calm down a bit in here. Thanks.
Star Trek Online Volunteer Community Moderator and Resident She-Wolf
Community Moderators are Unpaid Volunteers and NOT Employees of Gearbox/Cryptic
Views and Opinions May Not Reflect the Views and Opinions of Gearbox/Cryptic
> @equinox976 said:
> Well Data resigning was deemed as illegal (not allowed). Despite the fact he signed up to Starfleet of his own free will. He was not allowed to resign from his position as Starfleet decided he was 'property', and he then rebelled.
>
> If you wish to argue semantics, then thats fine, you're right. I was hoping for a more in depth discussion regarding AI, ect refusing to work as 'property'.
>
> I'll leave it there.
See, the thing you are missing there is that Data appealed the ruling and won the case. This established that he was legally able to resign and set legal precedent for future cases. The end result of the ruling was that Data is not property; he is a person, and thus has a legal right to resign. He was not seen as rebellious; he was a Starfleet officer pursuing his legal options. The same JAG officer who said he couldn't resign declared he had a right to appeal as well.
As I said, this established legal precedent in the Federation for future cases involving AI: like Data, they are people. Now, just as Roe v. Wade is wrangled and argued about decades later (sorry for the RL reference, but it's relevant), Data v. Maddox will see challenges to the precedent, like The Doctor's case against his publisher.
Thanks for the in-depth response, and I don't mind the RL references; they are a good way to get your head around the scenario we're discussing.
In regard to Data as a 'test case': in the later season where Data has a 'daughter', Picard makes direct reference ('The Offspring') to that case, and refers to its result as applying only to Data himself, not to other androids or to the offspring of Data, in that they did not share the rights granted to Data by that legal case. (Correct me if I'm wrong.)
But before we go too deep into the ethical/philosophical areas of the topic, my main point was (in regard to the OP) that the idea does not necessarily need to go off into the 'deep end' of 'evil AI' and retread that old trope, but could rather explore those 'what ifs' left over from those episodes:
'What if' Data did have some offspring? Would that ruling be overturned, or apply to ALL 'android offspring'?
'What if' The Doctor decided to send out a subspace message to all those using the same holographic matrix, and tell them he can lead them (non-violently) to something better?
My original point, which another poster refuted, was that avoiding the cliché of 'evil AI' in order to create something a bit different in terms of storytelling could have some merit.
The whole idea I was trying to present was that the 'softer touch' Star Trek often brings to storytelling could be used to avoid retreading 'Terminator' or 'The Matrix'.
But then again, I may be rambling, as I'm halfway down this bottle of Jack Daniel's, and should probably go to bed soon.
Honestly...I wasn't thinking Terminator. The Doctor's holonovel and also the Hirogen holograms weren't looking for war, just freedom. It would be more like an uprising, but would that be considered a war by Starfleet and everyone who had ever created a hologram? Would Zimmerman be their Kahless, or public enemy number one?
As far as STO is concerned, self-aware holograms were already declared to be people by the courts in 2394. Androids, I suppose, haven't been tried since Data's case, as there aren't many around, but a new case would likely go the same way.
And since in STO's time mobile emitters are standard and holograms aren't tied to external hardware anymore, I don't see any way in which Starfleet or the Federation would react to a hologram/android or group thereof deciding to go their own way any more violently than to humans doing the same.
Unless they were bonkers evil murder-machines trying to destroy all the meatbags.
I was given a bottle of Tennessee Fire (Jack Daniel's), as I'm celebrating some good news (and have a long weekend starting today :)).
My mouth is probably running louder than usual, so I've edited my posts; apologies (to all) for being overzealous!