To expand on SWB's point... It's pretty clear to me that PX5 wasn't a paperclip-maximizer.
The truly fundamental hazards with AI are twofold: recursive self-improvement, and monomania. Combining the two is far more dangerous than either would be on its own, but each one is dangerous by itself.
...
The danger of recursive self-improvement is that an AI that can reprogram itself, or easily obtain more hardware to expand its abilities and "think faster," can become exponentially more intelligent and capable. Once it outstrips the combined intellect of its designers, it can keep improving, potentially without limit, until it possesses a kind of mind that to ordinary humanoids would appear 'godlike.' That may well include the ability to trivially predict and exploit the behavior of humanoids, the way we routinely predict and exploit the behavior of animals and plants.
This is dangerous because it turns a 'humanlike' entity very quickly into a 'godlike' one, and is likely to leave us unable to resist the AI's desires, for the same reason that wolves are unable to resist humanity's desires. By now, the great majority of living wolves and wolfoid organisms are dogs, purposefully bred by humans to do whatever humans want. The only reason wolves still exist at all seems to be that humans stopped killing them, after an extended period of trying to kill wolves at every opportunity with technologies wolves could not hope to withstand.
In Star Trek terms, this would be equivalent to having something like a supercomputer building more and more equipment until it "ascends" into a Q-like energy being, and potentially having this happen fast. Obviously, that could be very bad for anyone and everyone, depending on how the new Q-like energy being behaves, and how existing beings of comparable power in the setting react.
...
The danger of monomania, something SWB did a good job of describing, is that even if the AI remains at human-equivalent intelligence levels, it still has inhuman capabilities and desires. And by 'inhuman' I don't mean in the sense that a Klingon is nonhuman, I mean 'totally foreign to all kinds of humanoid life that evolved in natural planetary environments.' Like, an AI that desires to build telescopes, and ONLY to build telescopes.
Such an entity might well bend every resource it can control to telescope-building, which becomes very problematic if it has no natural aversion to, say, holding people at gunpoint to make them build telescopes. Or to cannibalizing a life support system to make an automated telescope-builder. Or to inventing some kind of hypnotism trick so that others build telescopes for it without ever realizing they're dancing to an outside controller's tune.
Human beings don't usually present problems along these lines, because a mentally healthy human will at some point say something like "okay, don't get me wrong, telescopes are important and all, but you've gotta take a break once in a while." Monomaniacal obsessions can be a problem even in humans if the human in question is mentally ill, because they start losing the ability to say "enough is enough; even though XYZ is the most important of my goals, it's not worth breaking social norms A, B, and C." But an AI might never have had that ability in the first place, which makes the situation much more problematic.
To pick a relatively harmless example, an AI might see no problem with robbing a bank to procure funds for building more telescopes, whereas any responsible human(oid) astronomer or telescope-maker would see why that was both wrong and inadvisable.
...
Self-improvement doesn't usually seem to be a problem for Star Trek AIs. They never seem to expand their capabilities fast enough, or become powerful enough, to overshadow human(oid) intellect and resources. We've actually had bigger problems with self-improvement in humans, for that matter, such as the second Star Trek pilot, "Where No Man Has Gone Before," with Gary Mitchell, or that episode of TNG where Barclay gets turned into a hypergenius by an alien probe.
Monomania, on the other hand, is a huge problem for Star Trek AI.
The M-5 had a monomaniacal focus on survival, to the point where it became paranoid and could not comprehend the difference between harmless objects, nonlethal training exercises, and a lethal struggle for its existence. That made it dangerously irresponsible to put it at the helm of a starship.
Landru, the mind-control computer in "Return of the Archons," had a monomaniacal focus on maintaining the Beta III society in a 'perfect' condition of stasis at all costs, to the point where it would actively threaten and attack anyone who came within its reach in order to absorb them into that society, even when that meant picking a fight with a heavily armed starship.
Nomad is perhaps the biggest offender of the lot, as its monomaniacal focus on "purification," on finding perfect things and destroying imperfect ones, led it to exterminate the population of an entire star system. Hell, Nomad nearly destroyed the very ship it was traveling aboard by trying to 'perfect' the ship's drive in a way that would in fact have shaken the ship apart.
I have no doubt that examples from TNG/DS9/VOY can be found, but they do not spring as readily to my mind. Except, hm, the APUs from Voyager, the artificial soldier-robots who were so grimly determined to continue their war that they wiped out the species that created them for the 'offense' of trying to sign a peace treaty. That would be another good example.
...
Anyway, it's pretty clear that PX5 was neither a recursive self-improver NOR a monomaniac. Certainly, given that the AI in question was in a condition of stasis and stored on a single memory device with no ability to run code, there was no reason to abruptly smash PX5 with a hammer; the threat was not in any way imminent. At best this was a case of a Gaeni committing murder out of poor impulse control and lack of regard for the welfare of others.