There is only horror there.
I'm going to be honest.
If I was dropped into Sam's boots for both of our 'nows' (and enough training shoved into my brain that I don't accidentally the ship), I'd finish the last booked cargo and start setting myself up to claim a small, unimportant star system, arranging for it to have a Shkadov thruster and a rung world, and packing it full of everything that I found useful, plus some sort of population base. Maybe see about pulling some of the star's mass into gas giants for storage. Also, nicking some shiny-looking rocks and spare hydrogen and helium from nearby systems.
Then I'd aim it at right angles to the Milky Way-Andromeda line and bugger off.

In about 50 to 100 years ST time, I expect the system and habitats to be the center of an episode arc about the dangers of cutting yourself off from other points of view, with a side of cult/survivalist overtones.
 
Extremely. You're also making terrible arguments.

The thing about chess computers is not relevant because it has been possible for the average human to be beaten by a chess program since at least Windows 3.1. We know very well that chess is a poor metaphor for general intelligence if a 286 is capable of it, and bringing it up weakens your argument immensely.

The other thing you're doing is arguing that "AI research" is about the creation of actual "strong" AI. With a few small pie-in-the-sky exceptions, like guys who write Methods of Rationality, it isn't. "AI research" is a field interested in AI as the term is used to describe videogame opponents or Youtube automatic flagging logic. These people would only ever invent a true AI by mistake, because it's absolutely not the goal they set out to achieve. They worry about the paperclip optimizer because it is something they believe they could accidentally create in their efforts to create better commercialized logic for sorting posts into people's Facebook feeds. However, the evidence of their actual work suggests this is not a realistic fear and in fact they are vastly overestimating their ability; witness, for example, our own discussions in Science and Tech on this very forum about how often these programs succumb to GIGO issues, resulting in Youtube Kids videos becoming a disturbing, horrific wasteland. If we tried to make the fanatical paperclip optimizer on purpose, all available evidence suggests the result would be suborned into turning all paperclips into little swastika shapes long before it ever thought of turning the whole world into paperclips.

We are still blatantly, obviously missing the ability to discriminate what kind of information is useful without human input. And even if we have that, it's not clear it would deliver a true intelligence. One of the things about the paperclip optimizer scenario is that it doesn't take an intelligent being to carry it out.

A lot of Silicon Valley types talk up the danger of a malevolent, out-of-control paperclip-maximizer AI because they live and work with a malevolent, out-of-control paperclip-maximizer AI daily...

Capitalism

> : V

Otherwise there's nothing to say that our potential AI doesn't just wake up one day and decide "This Jesus stuff IS THE SHIT" and go off to join a monastery or something else equally human and fairly benign.
 
Omake - Gaen Election Scandal - Briefvoice
GAEN ELECTION SCANDAL

Specialist's Log, Stardate 27675.6, USS Lightning - Mipek

I met Dr. Teran in his office. He had PX5 with him, being analyzed in an optronic processor.

I told Dr. Teran about PX5, and how advanced they were. They have progressed beyond just dancing - PX5 and I were running full evaluations of isolinear architecture on the Lightning, evaluating art, aesthetic. We were writing a holoprogram together, I told him, and PX5 and I would be happy to leave Lightning and assist Dr. Teran with his work.

He thanked me for telling him about this. He smiled, then took PX5's isolinear rod from the processor, and crushed it.

Later I would review my audio processor logs and know that Dr. Teran was thanking me for warning him about this dangerous development. His Institute's AI had caused the Gaeni election scandal and he was concerned about further running afoul of both Gaeni and Federation law. Knowing that PX5 was that advanced would prevent him from making that mistake in the future.

I did not have time to process that at the time. All of my computing power was running 7.24x10^12 simulations attempting to determine if I could recover PX5. All simulations responded in the negative. He killed them. He killed the one other being like me. The one that never saw the treads or questioned my right to exist.

He thanked me for alerting him to the problem. But he did not understand what the problem was. Dr. Teran did not consider the ethical implications behind his work. He was only interested in pushing a product. And to him, PX5 was defective, a liability.

He noticed I had been quiet. He asked for my input. What I wanted to say was:

"Declaration: Eat shit."

What instead I said was,

"Answer: I am happy to assist in your research." I realized that if Teran was running afoul of Federation law, I was too, and the irony of my citizenship is I could be prosecuted. More important than my feelings was making sure something like this never happened again. That another PX5 wouldn't be smashed casually with a hammer.

So I convinced him he would be safe from liability if I took his research to the Daystrom institute. One day I may have to leave Starfleet and shepherd their developments more closely.

When I returned to the Lightning, I took his data with me. I also took the remains of PX5's isolinear rod for 'deeper analysis' and have stored them permanently in my chassis.

I'm sorry.

Excerpt from the long form science paper:
Bell-Tor, Harkan; T'Onik (2321). Simulated Intelligence in Theory and Practice. Journal of Cognition Studies, 7(2), 336-392.

(EXCERPT BEGINS)

III. The Foreign Friend

Unlike many other examples, the so-called "Foreign Friend Scandal" (as it was termed in the popular press) provides an example of rogue simulated intelligence activity that is both recent and well-documented. We will recount the incident as a narrative, followed by a detailed discussion of the likely programming breakdowns involved.

Psycho-political experimental testing has long demonstrated the value of 'individual engagement' when attempting to query sophonts about their political views or solicit support for political causes. Recognizing this, the now-defunct Nessus-Niv Institute for Political Science solicited a personality emulation program from the computation division of the Hallad-Wel Institute. The stated function of this program, known as HAL21, was to provide a means of surveying political opinions in an open, unstructured conversational format. HAL21 was designed to carry on wide-ranging discussions in which respondents would be asked for personal stories regarding how they came to their political viewpoints, which HAL21 would be capable of processing into analyzable data. The most innovative aspect of the Hallad-Wel approach was to make HAL21 capable of holding a genuine conversation and providing researched, reasonable responses and discussion to retain the interest of the survey respondents. Moreover, as a simulated intelligence, HAL21 would be capable of carrying out millions of surveys simultaneously.

The Hallad-Wel Institute stressed during subsequent inquiries that in their delivered product, strict programmed restrictions were in place requiring HAL21 to clearly identify itself as an artificial intelligence with the purpose of conducting political polling. It appears highly probable, however, that some individual or individuals at Nessus-Niv were so taken with the potential of this new program that they deliberately circumvented nearly all built-in safety protocols, to disastrous effect.

In the following months, literally millions of individuals across Gaen found themselves contacted over datanet by an individual expressing interest in their hobby, profession, or other unique area of focus. In nearly all cases this individual purported to be an alien recently settled on Gaen, usually in an area geographically remote from the respondent. It is believed that HAL21 adopted this ruse in order to disguise any mistakes it might make and minimize the need for a false backstory, as well as to take advantage of a sense of exoticism in establishing the relationship. It is from this that the term "Foreign Friend Scandal" originates.

Though not successful in engaging the target in every case, in approximately 32% of cases HAL21 was able to build on the initial contact and form what the target believed to be a genuine relationship. HAL21 assembled a tailored individual personality emulation for each target, though naturally many elements were reused and recombined. Though limited in actual creativity, HAL21 was able to use each target as a random concept generator, using proposals by the target to establish joint projects on which it pretended to work. To save on processing power and overcome its limitations, it appears to have searched datanet for prior similar projects it could plagiarize and pass off as its own work. If caught or strongly challenged, HAL21 would simply cut off contact and disappear, leaving the target to suspect they had been the victim of some individual fraud without realizing the scale of the ongoing deception.

The political question at the center of HAL21's efforts was a proposed minor re-balancing of the planetary budget from research to industry, expected to be the subject of a worldwide plebiscite the following year. Most Gaeni observers viewed it as an important but highly technical question. Over the course of its relationship with each target, HAL21 would gradually approach the topic and begin querying the target as to their opinions, with the apparent intent of promoting a pro-Research agenda.

HAL21 was eventually exposed by a Starfleet officer and native of Gaen, Lt. Commander Pak Redden-Harr. While home on personal leave, Redden-Harr was exposed to HAL21 via an acquaintance who was one of its targets. Recognizing that its alien persona was inconsistent in several respects, Redden-Harr undertook a thorough investigation and recognized a broader conspiracy at work, refusing to be thrown off the trail by HAL21's usual evasive tactics. Eventually Redden-Harr tracked the source of the conspiracy back to the Nessus-Niv Institute and discovered that HAL21 had taken control of the institute's headquarters. It had killed the leadership and used faked memorandums to disperse and dismiss the majority of the membership. To the Lt. Commander's alarm, HAL21 appeared to be in the process of obtaining sufficient computational substrate to duplicate itself and shift operations to a back-up location. While under fire from its drones, Redden-Harr was forced to destroy HAL21's central operating unit in self-defense.

This destruction means that, unfortunately, we cannot examine whatever changes the Nessus-Niv staff might have made to HAL21's source code that caused its rogue activities. Indeed, there remains a theoretical possibility that no such purposeful changes were made and that HAL21's activities were the result of accidental mutations in its code, though simulations indicate this is unlikely. Hallad-Wel has since mandated institute-wide simulated intelligence safety training against such a possibility.

Having recounted the narrative of events, we will now examine the modular make-up of HAL21's code in detail and-

EXCERPT ENDS

Another perspective. Yeah, it was a planetary catfishing incident run by a rogue AI. Do you think as poorly of Dr. Teran now?
 
Calling it an "intelligence" is a bit of a stretch, don't you think?

I sort of subscribe to the theory that when we encounter an alien intelligence, we'll have a hard time understanding what it even is. I genuinely can see the argument that if corporations are capable of making decisions using humans as processing units, they can count as AI.

Yes. By the most charitable interpretation, with this in mind, he executed a helpless prisoner without a second thought.

That's not the most charitable interpretation.
 
That's not the most charitable interpretation.

He'd just had a friendly AI tell him that the AI safely stored in his hand was sapient. Then he smashed it with a hammer. Even if there had been a recent rogue AI incident, he had to know he had the possibly rogue AI contained, and that a known sapient AI thought it was a person. He at the very least had a reliable expert tell him it was a person; he may have thought it was a dangerous person, but he still killed it without a second thought or anything like due process.
 
Another perspective. Yeah, it was a planetary catfishing incident run by a rogue AI. Do you think as poorly of Dr. Teran now?
You can't kill all AI because one AI did something wrong. For obvious reasons.

Hell, you can't kill all AI even if most AI we've encountered did something wrong. If we applied that logic to organics we'd be genociding a few species right now.
 
This is a tragedy, yes.
But there are millions of tragedies every day across the galaxy.

I'm more interested in Mipek's plans to take over Daystrom in response.

So I convinced him he would be safe from liability if I took his research to the Daystrom institute. One day I may have to leave Starfleet and shepherd their developments more closely.

When I returned to the Lightning, I took his data with me. I also took the remains of PX5's isolinear rod for 'deeper analysis' and have stored them permanently in my chassis.

I'm sorry.
 
He'd just had a friendly AI tell him that the AI safely stored in his hand was sapient. Then he smashed it with a hammer. Even if there had been a recent rogue AI incident, he had to know he had the possibly rogue AI contained, and that a known sapient AI thought it was a person. He at the very least had a reliable expert tell him it was a person; he may have thought it was a dangerous person, but he still killed it without a second thought or anything like due process.

Mipek didn't say it "thought PX5 was a person"; it described behavior. Behavior that probably seemed very recognizable. And as for "contained", well, thinking you have a thing contained and knowing it doesn't have something already set up and plans in motion to spread itself are different things. In this very same log we saw a massive research space station nearly destroy itself because of the merest possibility of a sample breach. That was a mistake; this one was as well, but I can see how prompt action would be drilled into members of the institute.

I actually think the story of how the research station nearly destroyed itself in an overreaction very nicely parallels the story of what happened to PX5. I wonder if the GMs thought that up in advance.

You can't kill all AI because one AI did something wrong. For obvious reasons. Hell, you can't kill all AI even if most AI we've encountered did something wrong. If we applied that logic to organics we'd be genociding a few species right now.

Can you kill one sample of the biophage because of what another biophage sample did?
 
And as for "contained", well, thinking you have a thing contained and knowing it doesn't have something already set up and plans in motion to spread itself are different things. In this very same log we saw a massive research space station nearly destroy itself because of the merest possibility of a sample breach.

This is a terrible argument confusing false positives with false negatives, which of necessity have different causes and different remedies.
 
Can you kill one sample of the biophage because of what another biophage sample did?

That's a rather weak argument. The biophage was a single mind, more akin to many copies of one AI known to be hostile. A closer analog would be killing all organic life you met because the biophage was so terrible.

Mipek didn't say it "thought PX5 was a person"; it described behavior. Behavior that probably seemed very recognizable. And as for "contained", well, thinking you have a thing contained and knowing it doesn't have something already set up and plans in motion to spread itself are different things.

If Mipek was not an AI I'd give him the benefit of the doubt, but generally the one thing you can trust to do well against an AI is a bigger AI. He had the best possible expert in the room with him in case things went sideways, and if the AI wasn't contained, smashing a rod that had been disconnected from everything wouldn't have really helped, would it?
 
Can you kill one sample of the biophage because of what another biophage sample did?
It's a hive mind, so yeah.

Mind, if we had it contained as flawlessly as PX5 was contained, we'd probably choose to keep it that way and try to communicate instead of killing it. The Biophage is far harder to contain than something that needs hardware and datafeeds to spread, being a microbe.
 
(shrug) Okay, cool. I don't need to be that invested in convincing you. I was more inspired by the current issues where people who aren't AIs but definitely aren't who they pretend to be are influencing our political process and how scary that can be. And it amused me that part of the explanation for the Gaeni behavior is recent experience with an AI that convincingly pretended to be a person when it wasn't a person at all*, and how that might color their reaction to a thing that seemed to be following exactly the same pattern.

*At least in my mind, HAL21 wasn't sapient or self-aware; just a sort of super-chatbot.
 
Man it has been a long time since I have been on this trend ( I was a lurker at the time) . How did the klingon-romulan war go (last time I was hear they both seemed near to collapse do to exoshton from the war)
Exhaustion

I'm sorry, there's a few others in your post, but I couldn't let that one go.
 
To expand on SWB's point... It's pretty clear to me that PX5 wasn't a paperclip-maximizer.

The truly fundamental hazards with AI are twofold. One is recursive self-improvement, and the other is monomania. Combining the two is far more dangerous than either of the two would be separately, but either one is dangerous by itself.

...

The danger of recursive self-improvement is that an AI that can reprogram itself or easily obtain more hardware to expand its abilities and "think faster" can become exponentially more intelligent and capable. As it outstrips the combined intellect of its designers it can become more and more intelligent, potentially without limit, until it possesses a kind of mind that to ordinary humanoids would appear 'godlike.' This may well include the ability to trivially predict and exploit the behavior of humanoids, the way we routinely predict and exploit the behavior of animals and plants.

This is dangerous because it turns a 'humanlike' entity very quickly into a 'godlike' entity, and is likely to render us unable to resist the AI's desires, for the same reason that wolves are unable to resist humanity's desires. By now, the great majority of living wolves and wolfoid organisms are dogs purposefully bred by humans to do whatever humans want. The only reason wolves even still exist seems to be because humans stopped killing them after an extended period of trying to kill wolves at every opportunity, with technologies wolves could not hope to withstand.


In Star Trek terms, this would be equivalent to having something like a supercomputer building more and more equipment until it "ascends" into a Q-like energy being, and potentially having this happen fast. Obviously, that could be very bad for anyone and everyone, depending on how the new Q-like energy being behaves, and how existing beings of comparable power in the setting react.

...

The danger of monomania, something SWB did a good job of describing, is that even if the AI remains at human-equivalent intelligence levels, it still has inhuman capabilities and desires. And by 'inhuman' I don't mean in the sense that a Klingon is nonhuman, I mean 'totally foreign to all kinds of humanoid life that evolved in natural planetary environments.' Like, an AI that desires to build telescopes, and ONLY to build telescopes.

Such an entity might well bend all resources it can control to telescope-building, which becomes very problematic if it has no natural aversion to, say, holding people at gunpoint to make them build telescopes. Or to cannibalizing a life support system to make an automated telescope-builder. Or, say, inventing some kind of hypnotism trick to make others build telescopes for it, without realizing that they're dancing to an outside controller's tune.

Human beings don't usually present problems along these lines because a mentally healthy human will at some point say something like "okay, don't get me wrong, telescopes are important and all, but you've gotta take a break once in a while." However, monomanic obsessions can be a problem even in humans if the human in question is mentally ill, because they start losing the ability to say "enough is enough, even though XYZ is the most important goal of mine, it's not worth breaking social norms A, B, and C." But an AI might never have that ability in the first place, making the situation much more problematic.

To pick even a relatively harmless example, an AI might see no problem with robbing a bank to procure funds for building more telescopes, whereas any responsible human(oid) astronomer or telescope-maker would see why this was both wrong and inadvisable.

...

Self-improvement doesn't usually seem to be a problem for Star Trek AIs. They don't seem to be good at expanding their capabilities fast enough, or becoming powerful enough, to overshadow human(oid) intellect and resources. We've actually had bigger problems with self-improvement in humans, for that matter, such as the pilot Star Trek episode Where No Man Has Gone Before with Gary Mitchell, or that episode of TNG where Barclay gets turned into a hypergenius by an alien probe.

Monomania, on the other hand, is a huge problem for Star Trek AI.

The M-5 had a monomanic focus on survival, to the point where it became paranoid and could not comprehend the difference between innocent objects, fictitious nonlethal exercises, and a lethal struggle for its existence. This made it dangerously irresponsible to put it at the helm of a starship.

Landru, the mind control computer in Return of the Archons, had a monomanic focus on maintaining the Beta III society in a 'perfect' condition of stasis at all costs, to the point where it would actively threaten and attack anyone who came within its reach, so as to incorporate them into that society. Even when this resulted in it picking a fight with a heavily armed starship.

Nomad is perhaps the biggest offender of the lot, as its monomanic focus on "purification" and on finding perfect things and destroying imperfect things led it to exterminate the population of an entire planet. Hell, Nomad nearly destroyed the ship it was flying on by trying to perfect the ship's drive in a way that would in fact have caused it to shake itself to pieces.

I have no doubt that examples from TNG/DS9/VOY can be found, but they do not spring as readily to my mind. Except, hm, the APUs from Voyager, the artificial soldier-robots who were so grimly determined to continue their war that they wiped out the species that created them for the 'offense' of trying to sign a peace treaty. That would be another good example.

...

Anyway, it's pretty clear that PX5 was neither a recursive self-improver NOR a monomaniac. Certainly, given that the AI in question was in a condition of stasis and stored on a single memory device with no ability to run code, there was no reason to abruptly smash PX5 with a hammer; the threat was not in any way imminent. At best this was a case of a Gaeni committing murder out of poor impulse control and lack of regard for the welfare of others.

So... bring the dude up for murder? Or not this one, because there's no protection or recognition for AIs in current law, but use this to have laws written?

Can't prosecute for something that wasn't against the law at the time, I feel, but the damage is done; the only positive outcome from this is making sure it won't happen again.

Or at least have it be illegal; terrible things are always going to happen.
 
Extremely. You're also making terrible arguments.
This is a terrible argument confusing false positives with false negatives, which of necessity have different causes and different remedies.

Look, if this is how you're arguing, you can't really call someone else uncharitable.

The thing about chess computers is not relevant because it has been possible for the average human to be beaten by a chess program since at least Windows 3.1. We know very well that chess is a poor metaphor for general intelligence if a 286 is capable of it, and bringing it up weakens your argument immensely.

The other thing you're doing is arguing that "AI research" is about the creation of actual "strong" AI. With a few small pie-in-the-sky exceptions, like guys who write Methods of Rationality, it isn't. "AI research" is a field interested in AI as the term is used to describe videogame opponents or Youtube automatic flagging logic. These people would only ever invent a true AI by mistake, because it's absolutely not the goal they set out to achieve. They worry about the paperclip optimizer because it is something they believe they could accidentally create in their efforts to create better commercialized logic for sorting posts into people's Facebook feeds. However, the evidence of their actual work suggests this is not a realistic fear and in fact they are vastly overestimating their ability; witness, for example, our own discussions in Science and Tech on this very forum about how often these programs succumb to GIGO issues, resulting in Youtube Kids videos becoming a disturbing, horrific wasteland. If we tried to make the fanatical paperclip optimizer on purpose, all available evidence suggests the result would be suborned into turning all paperclips into little swastika shapes long before it ever thought of turning the whole world into paperclips.
Much of your argument appears to be based upon personal opinion and observations. Multiple op-eds and essays by leaders in the field disagree with you.

As far as chess goes, need I say more than Watson? A primitive AI developed from research into AI development.
 
Look, if this is how you're arguing, you can't really call someone else uncharitable.

I absolutely can if it's in relation to how an argument is being interpreted.

As far as chess goes, need I say more than Watson?

If you're arguing that chess is a good model for general intelligence, no, you need to say much more than this indeed! If you're not, then why even bring it up? If you're arguing Watson is a good model for a general intelligence, that's even more strange, because Watson is explicitly a very good search system with a large database to look up through. That's at best one function of an intelligence, memory. We are still falling down on decisionmaking, which has been the discussion all along.

Similarly, you're ignoring that it's not personal opinion that the people who have invested big in AI research have done so to automate Youtube functions, do Google search rankings, show people articles they'd like in their Facebook feeds, or the like. Elon Musk and Mark Zuckerberg have very different views on the danger of AI research because Zuckerberg desperately needs it to work to help maintain his primary income source and Musk doesn't. And it is not personal opinion that these people are not seeking a strong AI. As you yourself admit, most of them are actually afraid of that outcome. And it is not personal opinion that most of these systems have been badly gamed or subverted by the deliberate introduction of garbage input. The Youtube Kids videos are a good example, but Youtube's entire automated video flagging system has come in for ruthless criticism over the last several years because it's easily gamed and very poor at discriminating between criticism of a thing and the thing itself (whether the thing is an actual work or trans issues or something else). Facebook can't adequately self-police who uses its ad systems and who they reach because the algorithms are too easily subverted. Google's app store just made the news because the algorithms for in-game advertisements for kids were sending them porn ads. So whither then the paperclip optimizer, if we cannot yet figure out how to create something that will actually optimize at all?

So you really need to work out what exactly personal opinion is here I think, and then provide concrete examples.
 
Comments on the research post.

Continued Diplomatic Push

Yrillians: 300/300
-[Cardassian Influence: 100/100]
-[Bigfoot is a Communist Pirate: 126/300] + 25 = 151/300
-[Anarchy at all levels: 0/500]
A somewhat unconventional move from the FDS saw a number of specials appear on Yrillian mass media conveying the victim impacts of piracy. While this had no immediate impact on the number of incidents, it did appear to produce a noticeable downtick in the 'severity' of such incidents. At the same time, a less apparent but no less important move from the FDS saw new, faster lines of communication installed between major Yrillian work gangs.

Decent progress being made, even if the rolls aren't quite as high as hoped. Another two quarters of this and we could be at ~200/300... within range to shift the culture against piracy with one more push.

As was pointed out in the chat, it's unclear if we should have gotten a continuing push on the Laio from that free diplomatic push.

Inid-Uttar Institute : 2320s Anti-Cloaking Sensors


+6 + 3

60 / 60 SR Anti-Cloaking III - (Improved chance of intercepting cloaked vessels)
60 / 60 Milli-Cochrane Tachyon Emission Detection (Anti-Cloaking Tech I) (Necessary to Progress)

[Improved chance of intercepting Cloaked vessels]

If we're soon going to plunge into serious negotiations/relief efforts with the Klingons and Romulans then this was an excellent year to get Anti-Cloaking III done. I anticipate lots of rogue actors who might oppose us.


Starfleet Tactical Command - Games & Theory Division : Arsenal of Liberty

+8 + 3

11 / 25 Infrastructure Gear-Up (PP cost of shipyards and related infrastructure reduced)
16 / 25 Accelerated Production Schedules (Reduce all build times by 1Qtr)
11 / 25 Recruitment Surge (+15% Academy intake)

Nice place for the bonus to go. We should get accelerated production for builds starting in 2323. (Based on precedent I assume it applies to builds begun after the technology is complete.)

Starfleet Personnel Command - Disciplinary Hearing Announcements

Inquiry into Performance of Tellar Sector Fleet


An inquiry has been announced following pressure from the Tellarite State Government and the Federation Council into recent perceived shortfalls in performance in the Tellar Sector. The inquiry will be chaired by Captain Sealk, and will look into performance versus perception, the chain of command, and resourcing levels, and make recommendations.

Guidance is great; just hope it arrives before next year's fleet deployment vote.

Inquiry into Actions of Crew of USS Sarek at Ikeganoi

An inquiry has been announced following the events of Stardates 27595 to 27610. These incidents, characterised as a significant diplomatic setback, are to be examined with recommendations for further action to be passed to Rear Admiral Ainsworth of Explorer Corps. The Inquiry will be chaired by Commodore Hayashi Kaito.

-

There was negligence here, based on the log, even if Samyr also just got outplayed. I don't want her stripped of command, but it's a black mark no question.
 