I think I tend to be much more conservative in my reaction to game events than a lot of other players, which is both a positive and a negative. Often I see players going, "We must do something!" when we lose pp or don't win some rolls or whatever. I tend to be much more of the "let's let it ride and not be so quick to change course" school of thought.
Sure, let's see how the tag system actually looks in 2324 and if a high-Presence ship is still attractive.
Maybe so, but it's not automatically a bad idea by any stretch of the imagination. Diplomacy is supposed to be one of the Federation's strong suits, and the difficulty level of the diplomatic game has skyrocketed in the past few years.
I'm not sure that a change in focus of the ship's functions is really within the spirit of the refit system. I'm sure you're able to technically get there by swapping out parts on a Kepler, but I think for game balance it should probably be treated as a new ship class with requisite prototyping requirements. Making all those new parts fit together with a diplomatic focus seems like it should require fundamental design testing.
I don't know. It sounds like most of the modifications are concentrated in specific parts of the ship, removing specific systems and putting other systems in their place that are usually pretty well-behaved (like a recreational facility that does not include a holodeck).
I mean, some of the refit projects we're contemplating are far more comprehensive. The Excelsior-B, Renaissance-A, and Ambassador-A refits all involve ripping out large portions of the ship's main armament, power systems, and possibly computer cores, to make room for entirely new systems that are relatively untried.
If we can do all of that with a refit, scooping out half a Kepler's lab section and replacing it with a cruise-ship-style promenade deck shouldn't be that hard.
It's not like refits don't reflect some serious design work on our part, after all; it's just that this work takes place in the background and mostly involves known interactions with known systems, so there's less need for prototype test-bed ships.
Do remember that clearing tags is not a series of straight Presence checks. It may be more weighted to P-checks than other tables, but other stats are still very important too and will often come up.
Yes, but a Presence-weighted Kepler variant will still have as good a Defense and Science score as many other ships likely to appear in task forces. It's relatively weak in Combat/Hull/Shield, but that's simply a reason not to send it into task forces where it's highly likely to get shot at.
Also, I guess I'm ambivalent about the tags being created as a pacing mechanism, only for us to push to overcome that mechanism and make it go away as soon as possible.
Well, the thing is, the pacing mechanism is still a powerful obstacle in our path. It's so powerful that we have reason to push against it, because if we don't actively push against it we're going to have effectively zero new members joining the Federation between now and some time in the 2340s. I don't think we as players are obliged to accede THAT far to the "whoa, slow down" pushback.
Honestly, what worries me is that it's probably not telepathic monitoring but rather something that can actually fail. Otherwise this person would never have gotten as far as the Federation; they'd have dropped on him the second he started planning his escape.
Another issue that occurs to me is that even with the best will in the world, you have to account for two different kinds of "failure" when designing a system that detects a problem with people and treats it.
One is the false negatives: you don't spot a failure that should have been spotted, and something bad happens. At best, an untreated problem. At worst, with a system like the Harmony's, someone goes on to be a serial killer.
The other, though, is the false positives. If you spot failures that don't exist, you end up wasting time and resources
...
As an example of why this is an issue if you have a systematic "pre-crime detection" system... well, at a rough Google-based estimate, the US has somewhere in the neighborhood of one active serial killer per ten million people. Suppose we had a system that was 99.99% reliable at catching serial killers (ridiculously good), and 99.999% effective at correctly looking at a non-serial-killer and saying "this is not a serial killer" (also ridiculously good).

We'd rapidly catch all thirty or so active serial killers, and every time a new one appeared we'd catch them too. The thing is, we'd also catch the 0.001% of the population who are not serial killers, but who the test identifies as serial killers anyway. And that 0.001% of the population would add up to about three thousand people. So at any one time, the number of people being mistakenly identified as serial killers by our test would outnumber the real serial killers one hundred to one.
Our test, which is already ridiculously good, would have to be 100 times more effective at rejecting "not a real serial killer" people from consideration to even bring the ratio down to 50/50. It would have to have literally a one in ten million chance of looking at a person who is not actually a serial killer and misidentifying them as one. The odds of this test falsely accusing you would have to be lower than the lifetime odds of being struck by lightning or winning the lottery (not being struck or winning on any one day, but lifetime odds).
Even then, of our population of people who are identified as serial killers by the test... half of them are not serial killers. Which is probably unsatisfactory if we're determining which people we need to have taken away and locked in padded rooms forever.
It is very probable that in the process of making our test strict enough that it DOESN'T finger innocent people, we wind up making it strict enough that some of the real serial killers slip through the test due to not fitting our profile.
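The arithmetic above can be sanity-checked in a few lines. This is just a sketch of the illustrative numbers in this post: a US-scale population of 300 million, one active serial killer per ten million people, and a specificity of 99.999%, which is the rate that yields the "about three thousand" false positives figure.

```python
# Base-rate arithmetic sketch. All numbers are the post's illustrative
# assumptions, not real statistics.
population = 300_000_000
true_killers = population // 10_000_000          # ~30 active serial killers
sensitivity = 0.9999                              # P(flagged | killer)
false_positive_rate = 1 - 0.99999                 # P(flagged | not a killer)

caught = sensitivity * true_killers               # real killers flagged
false_alarms = false_positive_rate * (population - true_killers)

print(round(caught))                  # ~30 real killers caught
print(round(false_alarms))            # ~3,000 innocents flagged
print(round(false_alarms / caught))   # ~100 false positives per real one
```

Even with a test this absurdly accurate, the rarity of the trait means the flagged pool is overwhelmingly innocent; driving the false positive rate down another factor of 100 (to one in ten million) only gets the pool to 50/50.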
...
So there's a tradeoff between false positives and false negatives, further complicated by the self-fulfilling prophecy effect we've already talked about. And even if we use traits less rare and unlikely than "being a serial killer" such as, well, "generic psychosocial deviancy" or whatever... The fundamental issue is still there.
If "deviants" of whatever kind make up, say, 1% of the population... A test that is 100% reliable at catching "deviants" and flagging them for "treatment" will probably scoop up something like 0.5% or 1% or even 2% of the non-"deviant" population as well.
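The same arithmetic, sketched for this less extreme case. The 1% base rate and the 0.5%/1%/2% false positive rates are the hypotheticals from the paragraph above; even a test that never misses a real "deviant" produces a flagged pool that is one-third to two-thirds innocent.

```python
# Positive predictive value for a 1% base rate. Rates are the post's
# hypotheticals, not real data.
base_rate = 0.01          # 1% of the population are "deviants"
sensitivity = 1.0         # test catches every real "deviant"

for fpr in (0.005, 0.01, 0.02):       # plausible false positive rates
    flagged_true = sensitivity * base_rate
    flagged_false = fpr * (1 - base_rate)
    ppv = flagged_true / (flagged_true + flagged_false)
    print(f"FPR {fpr:.1%}: {ppv:.0%} of flagged people are real 'deviants'")
```

Note the pattern: when the false positive rate equals the base rate, roughly half the people flagged are innocent, no matter how good the test's sensitivity is.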
...
Incidentally, this is also a problem in real life; I remember reading a blog that discussed this and pointed out that, due to the statistical consequences of the limits of testing accuracy and the sheer number of people being tested, it was entirely possible to create a situation where most people with a certain mental disorder weren't getting medication for it, while most people who were getting the medication didn't have the mental disorder, AT THE SAME TIME.
This despite the medication being genuinely effective at treating the disorder!
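It's worth seeing how both failures can hold simultaneously. The rates below are made-up numbers chosen only to show the effect, not figures from the blog in question: a 2% prevalence, only 40% of sufferers diagnosed and treated, and 1.5% of healthy people misdiagnosed.

```python
# Hypothetical rates showing both failure modes at once.
prevalence = 0.02                 # 2% of people have the disorder
treated_given_disorder = 0.40     # only 40% of them get the medication
treated_given_healthy = 0.015     # 1.5% of healthy people get it anyway

treated_sick = prevalence * treated_given_disorder          # 0.8% of everyone
treated_healthy = (1 - prevalence) * treated_given_healthy  # ~1.47% of everyone

# Most people with the disorder are untreated (60% missed)...
assert treated_given_disorder < 0.5
# ...and most people on the medication don't have the disorder.
share_without_disorder = treated_healthy / (treated_sick + treated_healthy)
print(f"{share_without_disorder:.0%} of the medicated don't have the disorder")
```

Because the healthy population is fifty times larger than the sick one, even a small misdiagnosis rate swamps the correctly treated group.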