I feel like Janus knows more about the subject than you, given that he's confident enough to propose something like that.
In this case, the actual problem isn't "AI in general." It's Wower's recent discovery that anthropomorphizing a system tends to cause it to attain sapience.
This makes it quite challenging for us to straddle the line between "smart enough to be useful" and "smart enough to develop a personality," because there is no line to straddle; it's blurry.
A Palm Pilot is unlikely to attain sapience because it just doesn't have the necessary hardware performance. And a super-duper-ultra computer that no one anthropomorphizes may never get there either.
But in the intermediate range, where we have, for example, Norm? There, there's no bright line we can design into our systems to prevent them from attaining sapience if someone anthropomorphizes them and interacts with them enough.
As to why Janus and Ludivine haven't said so... Well, bluntly: Ludivine knows a lot, but this is a recent discovery in AI theory that even she didn't know about until a few months ago. She probably still thinks about AI mechanistically, has a case of "not invented here" syndrome, and is probably a bit too arrogant to have fully processed the significance of Wower's breakthrough just yet. And Janus is even more arrogant, even more likely to think of intelligent beings (including AIs) as clay for him to reshape at will, and somewhat less naturally gifted than Ludivine besides.
They could just be
wrong, y'know.
Crit mechanics are going to be changed to solve the 'Crits past DC 150 are near impossible' and 'Bare failures are better than normal ones' issues
@Made in Heaven ... Uh... why is "bare failures are better than normal ones" a problem? Wouldn't it seem reasonable and logical for a failure that rolls close to the DC to be less bad than one that rolls far below the DC?
Why is this a problem you need to fix? Am I missing something here?
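For what it's worth, here's a toy sketch of how both complaints can fall out of the same margin-based math. Everything in it (d100 + modifier resolution, the fixed crit margin of 50, the -10 "bare failure" band, the example +90 modifier) is my own assumption about how a system like this might work, not the quest's actual rules:

```python
import random

CRIT_MARGIN = 50  # assumed: crit requires beating the DC by 50+

def resolve(modifier: int, dc: int, roll: int) -> str:
    """Classify one hypothetical d100 + modifier roll against a DC."""
    margin = roll + modifier - dc
    if margin >= CRIT_MARGIN:
        return "crit"
    if margin >= 0:
        return "success"
    if margin >= -10:
        return "bare failure"  # just missed: mild consequences
    return "hard failure"      # missed badly: severe consequences

def crit_chance(modifier: int, dc: int, trials: int = 100_000) -> float:
    """Estimate crit probability by simulating d100 rolls."""
    hits = sum(
        resolve(modifier, dc, random.randint(1, 100)) == "crit"
        for _ in range(trials)
    )
    return hits / trials

# With a fixed crit margin, crits dry up as DCs climb: at DC 150 even a
# +90 modifier needs a 200 total, i.e. a natural 110 -- impossible on d100.
for dc in (50, 100, 150):
    print(f"DC {dc}: crit chance ~ {crit_chance(modifier=90, dc=dc):.2%}")
```

Under those assumptions, the severity tiers make "bare failures are milder than hard failures" a feature of margin-based resolution rather than a bug, while the fixed crit margin is exactly what makes crits past DC 150 near impossible.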